Introduction

VIDIZMO offers multiple broker service options to manage real-time data feeds and enable smooth communication between its microservices. Apache Kafka, one of these options, is a distributed event streaming platform designed for handling real-time data feeds and is commonly used as a message broker.


VIDIZMO, with its complex event processing needs, benefits from the seamless integration and communication offered by Event-Driven Architecture (EDA). Incorporating Apache Kafka, a distributed event streaming platform, enhances data processing and communication within VIDIZMO's ecosystem. By utilizing Kafka as a broker service, VIDIZMO establishes a central component for inter-communication among its various services. This approach supports real-time data streaming and event-driven architecture and ensures reliable communication between different system components.


Kafka in VIDIZMO: A Broker's Role

VIDIZMO leverages Kafka's powerful message brokering capabilities to enable its event-driven architecture. The application employs a publish/subscribe model on top of the Kafka messaging system for seamless communication between its microservices, with Kafka acting as the central message-routing component. When an API endpoint generates an event, it is transmitted to Kafka, where one or more subscribers pick up and process the incoming messages. Kafka serves as the core mechanism for message exchange, handling service requests, responses, and exception notifications between client and server.



Requests are structured as messages within the Kafka ecosystem, utilizing its API for communication. Additionally, Kafka manages error handling in response to reported exceptions, ensuring robust reliability. VIDIZMO's support for Kafka as the messaging system adds flexibility in handling real-time data feeds and facilitates effective communication among microservices.
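As a minimal sketch of this publish/subscribe flow (not VIDIZMO's actual producer or consumer code), the console tools that ship with the Apache Kafka distribution can publish an event to a topic and read it back from a subscriber. The topic name portal-events and the port 9092 below are assumptions; in the Docker deployment described later, these scripts are run inside the Kafka container, where they may be available without the .sh suffix.

```
# Publish a sample event to an illustrative topic.
echo '{"event":"media-processed","id":42}' | \
  bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic portal-events

# In another terminal, a subscriber reads every message published to that topic.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic portal-events --from-beginning
```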


Kafka Features Used by VIDIZMO

Kafka acts as a broker that efficiently handles message flows between various components. Its core functionalities align closely with VIDIZMO's architecture requirements. 

  1. Message Delivery: 

For efficient data distribution, VIDIZMO utilizes topic partitioning. The VIDIZMO web app acts as a producer, publishing events and generating messages, e.g., processing tasks. Each message stream is categorized into a logical channel called a topic, and each topic is further divided into partitions for horizontal scaling and fault tolerance. Different system components, acting as consumers, subscribe to relevant topics and promptly receive these messages for further processing. 
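As a sketch of how such a topic could be created with multiple partitions (the topic name processing-tasks and the partition count are illustrative, not VIDIZMO's actual topology):

```
# Create a topic with 3 partitions and a single replica (single-broker setup).
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic processing-tasks --partitions 3 --replication-factor 1

# Describe the topic to confirm its partition layout.
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic processing-tasks
```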


2. Scalability and Performance: 

Kafka decouples producers and consumers, enabling independent scaling and development. Producers publish messages without waiting for consumers to process them, and consumers can access messages at their own pace. 
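For example, two consumers started with the same (illustrative) group id share the topic's partitions between them, so consumption can be scaled out without any change to the producer:

```
# Start two consumers in the same consumer group; Kafka balances the topic's
# partitions across them, so each message is processed by only one of the two.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic processing-tasks --group task-workers &
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic processing-tasks --group task-workers &
```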


3. Decoupled Microservices: 

Producers and consumers operate independently, fostering loose coupling and modularity within the architecture. This simplifies the development, maintenance, and independent scaling of individual components.


Benefits of Kafka as a Broker Service 

  1. Real-time processing: Enables efficient workflows and immediate reactions to events. 
  2. High Throughput: Kafka handles high-velocity and high-volume data efficiently, supporting thousands of messages per second. 
  3. Flexibility: Easily integrates new components and workflows into the architecture. 
  4. Fault Tolerance: Kafka replicates each partition across multiple brokers; if a broker fails, a new leader is elected from the in-sync replicas, minimizing downtime and data loss (see the sketch after this list). 
  5. Cost-efficiency: Provides a scalable and efficient infrastructure solution.
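The fault tolerance mentioned in point 4 comes from topic replication. On a multi-broker cluster, a topic could be created with a replication factor greater than one; the broker count, topic name, and factor below are assumptions:

```
# With 3 brokers, keep 3 copies of each partition; if the partition leader's
# broker fails, one of the in-sync follower replicas is elected as the new leader.
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic portal-events --partitions 3 --replication-factor 3

# Shows which broker leads each partition and which replicas are in sync (ISR).
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic portal-events
```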

Shortcomings of Kafka 

  1. Setting up and maintaining Kafka can be challenging.
  2. Kafka relies on ZooKeeper for coordination, which adds an extra component to deploy, secure, and monitor.


Configuring Kafka in VIDIZMO

 Prerequisites 

  1. Follow the official Apache Kafka documentation to complete the installation and setup of Kafka on the server where VIDIZMO is deployed. 
  2. This is specific to on-premises VIDIZMO deployments and requires Administrator privileges. 


Step 1: Launch Kafka using Docker 

To facilitate seamless management and version control of Apache Kafka, we recommend deploying Kafka using Docker. Utilizing Docker offers the advantage of effortlessly switching between Kafka versions by updating the underlying docker-compose file. 

  1. Install Docker Desktop on your system by referring to the official Docker documentation.
  2. Download the Kafka docker-compose file from the official GitHub repository.
  3. Navigate to the folder where the docker-compose file was downloaded.
  4. Execute the command `docker-compose -f zk-single-kafka-single.yml up -d` in the command prompt to start the Kafka and ZooKeeper containers. You can verify that the containers are running as shown below.
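To confirm that the ZooKeeper and Kafka containers came up correctly, the compose file's status and logs can be checked from the same folder; the service name kafka1 is an assumption, so check the compose file for the actual name:

```
# List the services defined in the compose file and their current state.
docker-compose -f zk-single-kafka-single.yml ps

# Tail the broker's logs to confirm it finished starting; 'kafka1' is the
# service name assumed here - adjust it to match your compose file.
docker-compose -f zk-single-kafka-single.yml logs -f kafka1
```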
Step 2: Configuration in VIDIZMO  

To configure the broker service in the VIDIZMO application, follow the instructions given below:   
  1. Log in to the VIDIZMO application with Administrator privileges and navigate to the Navigation Menu.
  2. Click on the Control Panel.
  3. Select Application Configuration.
  4. Click on the VIDIZMO Runtime Configuration.
  5. Click 'Edit' to change/update the existing configuration.
  6. Under Event System, select Kafka from the drop-down menu.
  • Bootstrap Servers: In the designated field, enter the bootstrap server information in the format "localhost:KafkaPortNumber", for example localhost:9092 when Kafka is exposed on its default port.
  • Zookeeper Port: Enter the ZooKeeper port number, which can be obtained from Docker Desktop; it is listed in the Ports column of the Zookeeper container. This ensures proper communication with the ZooKeeper service. For the port required by Kafka, refer to the prerequisites for the VIDIZMO application.
  7. Click Update to save the configuration settings. An optional check that the bootstrap server is reachable is sketched below.
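As an optional check that the bootstrap server entered above is reachable (localhost:9092 is only an example), Kafka's own CLI can be queried from the machine running VIDIZMO:

```
# Asks the broker for its supported API versions over the same address VIDIZMO
# will use; a response listing API versions means the broker is reachable.
bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092
```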


Considerations 

When configuring Apache Kafka as a broker service in VIDIZMO, there are several considerations to ensure optimal performance, reliability, and scalability. Here are some key aspects to keep in mind: 
  • Ensure that the version of Apache Kafka you are using is compatible with the version of VIDIZMO. 
  • Configure storage settings, including the location and size of Kafka logs, and ensure there is sufficient disk space to handle the expected message volume (see the sketch after this list). 
  • Set up monitoring tools to track the performance and health of the Kafka cluster.
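As an illustration of the storage consideration above, retention for a topic can be capped so the Kafka logs do not outgrow the available disk; the topic name and limits below are examples only:

```
# Keep messages for 7 days or until a partition reaches ~1 GB, whichever comes first.
bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type topics --entity-name processing-tasks \
  --add-config retention.ms=604800000,retention.bytes=1073741824

# Report how much disk each partition's log currently occupies.
bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe
```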


Troubleshooting 

Here are some troubleshooting tips for common issues you may encounter when deploying Kafka.

  • Ensure ZooKeeper is running and accessible to Kafka brokers, and verify the connection details in the Kafka configuration (see the sketch after this list). 
  • Leverage Kafka monitoring tools to check broker health, message throughput, and resource utilization to identify bottlenecks or anomalies.
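For the ZooKeeper check above, one quick way (assuming ZooKeeper listens on its default port 2181) is to ask it which broker ids are currently registered:

```
# Lists the ids of brokers registered in ZooKeeper; an empty list means no
# Kafka broker has connected successfully.
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
```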