Architecture

Viduli's architecture is thoughtfully designed to accommodate the full spectrum of modern application patterns, from simple monolithic applications to complex, distributed microservices architectures. This flexibility ensures that teams can choose the architectural approach that best fits their specific requirements without being constrained by platform limitations.

The platform is optimized for global content delivery and performance. When users navigate to applications hosted on Viduli, their requests are intelligently routed through a sophisticated multi-layered architecture. Traffic first passes through our global Content Delivery Network (CDN), which directs requests to the nearest application gateway. Based on the specific route accessed, the application gateway leverages our service mesh to route traffic securely via mTLS encryption to the appropriate edge service. These edge services then communicate seamlessly with dependent business logic and data layer services to fulfill user requests, creating a robust and scalable request processing pipeline.

Viduli Architecture Diagram

Global CDN

A Content Delivery Network (CDN) is a geographically distributed network of servers that work together to provide fast delivery of internet content. Viduli's global CDN infrastructure serves as the first point of contact for user requests, strategically positioned at edge locations worldwide to minimize latency and maximize performance.

The CDN provides multiple critical benefits for applications hosted on Viduli. It dramatically reduces response times by serving content from the server closest to each user's geographic location, ensuring consistently fast load times regardless of where in the world users access your application.

Our CDN architecture incorporates robust fault tolerance mechanisms, automatically routing traffic away from any servers experiencing issues to maintain uninterrupted service availability. The system employs intelligent geo-load balancing algorithms that distribute traffic efficiently across multiple regions, preventing any single location from becoming overwhelmed while optimizing resource utilization.
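
The sketch below is a rough illustration of that routing decision, not Viduli's actual implementation: it sorts candidate regions by proximity to the user and picks the nearest one that is passing health checks, falling back to the next-nearest region when the closest one is unhealthy. The region names, distances, and health flags are invented for the example.

```go
package main

import (
	"fmt"
	"sort"
)

// region models a single edge location. The fields and sample data below
// are illustrative only; they are not Viduli's real topology.
type region struct {
	name     string
	distance float64 // approximate distance from the user, e.g. in km
	healthy  bool
}

// pickRegion returns the nearest healthy region, which is the essence of
// geo-load balancing with automatic failover: sort by proximity, then skip
// anything that is currently failing health checks.
func pickRegion(regions []region) (region, error) {
	sorted := append([]region(nil), regions...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i].distance < sorted[j].distance })
	for _, r := range sorted {
		if r.healthy {
			return r, nil
		}
	}
	return region{}, fmt.Errorf("no healthy region available")
}

func main() {
	regions := []region{
		{"eu-west", 450, false}, // nearest, but failing health checks
		{"eu-central", 900, true},
		{"us-east", 6200, true},
	}
	if r, err := pickRegion(regions); err == nil {
		fmt.Println("routing request to", r.name) // -> eu-central
	}
}
```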

Additionally, SSL termination occurs at the CDN edge, meaning encryption and decryption happen as close to users as possible. This approach significantly reduces latency while maintaining the highest security standards, as users benefit from the shortest possible encrypted connection path.
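
As a minimal illustration of what SSL termination at the edge involves, the Go sketch below accepts TLS connections, decrypts them, and forwards the requests to an upstream application gateway. The certificate paths and the upstream address are placeholders; in Viduli's architecture the hop behind the edge is itself protected by the service mesh rather than plain HTTP.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Upstream application gateway address; a placeholder for this sketch.
	upstream, err := url.Parse("http://gateway.internal:8080")
	if err != nil {
		log.Fatal(err)
	}

	// The edge terminates TLS here and forwards the decrypted request
	// upstream, so encryption/decryption happens as close to the user
	// as possible.
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	// edge-cert.pem / edge-key.pem are placeholder certificate paths.
	log.Fatal(http.ListenAndServeTLS(":443", "edge-cert.pem", "edge-key.pem", proxy))
}
```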

Application Gateway

The Application Gateway, also known as an API Gateway, serves as the intelligent traffic controller within Viduli's architecture. It acts as a single entry point for all client requests, responsible for routing HTTP requests to the appropriate backend servers based on request patterns, paths, and routing rules.
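
The sketch below shows the core idea of path-based routing in miniature, assuming three hypothetical backend services; it is not the gateway's actual implementation. Requests are matched by path prefix and reverse-proxied to the corresponding upstream.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// proxyTo builds a reverse proxy for one backend service. The service
// addresses below are placeholders, not real Viduli endpoints.
func proxyTo(rawURL string) *httputil.ReverseProxy {
	target, err := url.Parse(rawURL)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(target)
}

func main() {
	mux := http.NewServeMux()
	// Route by path prefix: /api/orders/* goes to the orders service,
	// /api/users/* to the users service, everything else to the frontend.
	mux.Handle("/api/orders/", proxyTo("http://orders.internal:8080"))
	mux.Handle("/api/users/", proxyTo("http://users.internal:8080"))
	mux.Handle("/", proxyTo("http://frontend.internal:8080"))

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```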

Beyond basic request routing, the Application Gateway is designed to support advanced traffic management features. While currently focused on core routing functionality, the gateway's architecture is prepared for future enhancements including rate limiting to prevent API abuse, comprehensive authorization mechanisms to secure access to different services, and request/response transformation capabilities.
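
Rate limiting is listed above as a planned capability; as a sketch of the underlying idea, the example below wraps a handler in a token-bucket limiter using the golang.org/x/time/rate package. The limits and the single global limiter are illustrative choices, not Viduli's configuration (a production gateway would typically keep one limiter per client or per API key).

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/time/rate"
)

// rateLimit wraps a handler and rejects requests once the token bucket is
// empty, returning 429 Too Many Requests.
func rateLimit(limiter *rate.Limiter, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// Allow a sustained 10 requests/second with bursts of up to 20.
	limiter := rate.NewLimiter(rate.Limit(10), 20)

	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	log.Fatal(http.ListenAndServe(":8080", rateLimit(limiter, backend)))
}
```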

High availability is built into the Application Gateway's core design. The system operates across multiple availability zones with automatic failover capabilities, ensuring that gateway failures never become a single point of failure for your applications. Load balancing algorithms distribute incoming requests across healthy gateway instances, maintaining consistent performance even during traffic spikes or individual component failures.

Service Mesh

The service mesh forms the backbone of secure and reliable service-to-service communication within Viduli's architecture. This dedicated infrastructure layer handles all inter-service communication, providing a comprehensive set of capabilities that individual services would otherwise have to implement themselves.

Security is paramount in the service mesh design. All service-to-service communication is automatically secured with mutual TLS (mTLS) encryption, ensuring that data remains protected as it travels between different components of your application. This zero-trust approach means that even internal communications are encrypted and authenticated, providing defense-in-depth security.
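
To make the mutual part of mTLS concrete, the sketch below configures a Go HTTP server that both presents its own certificate and refuses any client that cannot present one signed by a trusted CA. In a service mesh these certificates are issued and rotated automatically by the mesh itself; the file paths here are placeholders.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// ca.pem is a placeholder path to the mesh's certificate authority.
	caCert, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	caPool.AppendCertsFromPEM(caCert)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			// RequireAndVerifyClientCert is what makes this *mutual* TLS:
			// the server proves its identity with its own certificate and
			// rejects callers without one signed by the trusted CA.
			ClientAuth: tls.RequireAndVerifyClientCert,
			ClientCAs:  caPool,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("hello from an mTLS-protected service\n"))
		}),
	}

	// service-cert.pem / service-key.pem are placeholder paths for this
	// service's own certificate and private key.
	log.Fatal(server.ListenAndServeTLS("service-cert.pem", "service-key.pem"))
}
```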

Traffic management capabilities enable sophisticated routing and load balancing strategies. The mesh supports advanced traffic splitting for canary deployments, intelligent load balancing across service instances, and automatic retry mechanisms for transient failures. These features allow for seamless deployment strategies and improved application resilience.
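
The essence of traffic splitting for a canary deployment is a weighted choice between backend versions. The sketch below sends roughly 10% of requests to a hypothetical canary; the backend names and weight are invented for the example.

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickBackend implements a weighted split between a stable release and a
// canary. With canaryWeight = 0.1, roughly 10% of requests reach the canary.
func pickBackend(canaryWeight float64) string {
	if rand.Float64() < canaryWeight {
		return "orders-v2-canary"
	}
	return "orders-v1-stable"
}

func main() {
	counts := map[string]int{}
	for i := 0; i < 10000; i++ {
		counts[pickBackend(0.1)]++
	}
	// Expect roughly a 90/10 split across the two versions.
	fmt.Println(counts)
}
```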

Observability is built into every aspect of the service mesh. Comprehensive metrics, structured logging, and distributed tracing provide deep insights into application behavior and performance. This visibility enables teams to quickly identify bottlenecks, troubleshoot issues, and optimize application performance without adding instrumentation code to individual services.
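
A simplified way to picture what the mesh does on every request is the middleware below: it attaches a correlation ID if one is not already present and records the request's latency, without the application handler knowing anything about it. The X-Request-Id header name and the log-based output are illustrative; a real mesh exports metrics and trace spans instead.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
	"time"
)

// observe is a minimal stand-in for what a sidecar does transparently:
// attach a request ID so calls can be correlated across services, and
// record how long each request takes.
func observe(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		id := r.Header.Get("X-Request-Id")
		if id == "" {
			buf := make([]byte, 8)
			rand.Read(buf)
			id = hex.EncodeToString(buf)
			r.Header.Set("X-Request-Id", id)
		}
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("request_id=%s path=%s duration=%s", id, r.URL.Path, time.Since(start))
	})
}

func main() {
	app := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})
	log.Fatal(http.ListenAndServe(":8080", observe(app)))
}
```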

Access control and policy enforcement operate at a granular level, allowing administrators to define precisely which services can communicate with each other. This fine-grained control enhances security posture while maintaining the flexibility needed for complex application architectures.

Resilience features like circuit breaking and fault injection help build robust applications that can gracefully handle failures. The mesh automatically detects unhealthy services and routes traffic away from them, while fault injection capabilities enable teams to test their application's behavior under various failure scenarios.
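
The sketch below is a deliberately small circuit breaker illustrating the failure-handling idea: after a run of consecutive failures it opens and fails fast without calling the upstream, then allows traffic again once a cooldown has passed. Production meshes track richer signals (error rates, latencies, outlier detection), so treat this only as a conceptual model.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// breaker opens after maxFailures consecutive failures and rejects calls
// immediately until cooldown has elapsed.
type breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

var errOpen = errors.New("circuit open: failing fast")

func (b *breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return errOpen
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures == b.maxFailures {
			b.openedAt = time.Now()
		}
		return err
	}
	b.failures = 0
	return nil
}

func main() {
	b := &breaker{maxFailures: 3, cooldown: 5 * time.Second}
	flaky := func() error { return errors.New("upstream unavailable") }

	for i := 0; i < 5; i++ {
		// After the third consecutive failure the breaker opens and the
		// remaining calls fail fast without touching the upstream.
		fmt.Println(b.Call(flaky))
	}
}
```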

The service mesh also simplifies service discovery and enables dynamic routing, automatically managing the complex task of locating and connecting services as they scale up or down. Additionally, it plays a crucial role in geographical load balancing, intelligently routing requests to the most appropriate service instances based on location and current load conditions.
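
Conceptually, service discovery boils down to a registry that maps service names to the instances currently running, with callers resolving a name to one address at request time. The toy registry below illustrates that, with random instance selection standing in for the mesh's load-aware routing; the service names and addresses are placeholders.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// registry is a toy service registry: instances register as they scale up
// and callers resolve a service name to one currently-registered address.
type registry struct {
	mu        sync.RWMutex
	instances map[string][]string
}

func newRegistry() *registry {
	return &registry{instances: map[string][]string{}}
}

func (r *registry) Register(service, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.instances[service] = append(r.instances[service], addr)
}

// Resolve picks one instance at random, a stand-in for a smarter,
// load-aware routing decision.
func (r *registry) Resolve(service string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	addrs := r.instances[service]
	if len(addrs) == 0 {
		return "", fmt.Errorf("no instances registered for %s", service)
	}
	return addrs[rand.Intn(len(addrs))], nil
}

func main() {
	reg := newRegistry()
	reg.Register("orders", "10.0.1.12:8080")
	reg.Register("orders", "10.0.2.7:8080")

	addr, _ := reg.Resolve("orders")
	fmt.Println("routing to", addr)
}
```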

Container Orchestrator

Containers represent a fundamental shift from traditional virtual machine-based infrastructure, offering significant advantages in terms of resource efficiency, deployment speed, and application portability. Unlike virtual machines that require a full operating system for each instance, containers share the host OS kernel while maintaining complete application isolation. This approach results in dramatically reduced resource overhead, faster startup times, and higher density deployments.

At Viduli, we firmly believe that containers have become the default solution for cloud-native application deployment. They provide the perfect balance of isolation, portability, and efficiency that modern applications demand. Containers enable developers to package applications with all their dependencies, ensuring consistent behavior across development, testing, and production environments while eliminating the "it works on my machine" problem.

Viduli leverages Kubernetes, the industry-leading container orchestration platform, to manage containerized workloads at scale. Kubernetes provides robust features including automated deployment and scaling, rolling updates, health monitoring, and self-healing capabilities. This proven orchestration layer ensures that your applications remain highly available and can automatically adapt to changing load conditions while maintaining optimal resource utilization.
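
Kubernetes' health monitoring and self-healing rely on containers exposing probe endpoints. The sketch below shows a service with a liveness endpoint (failures trigger a container restart) and a readiness endpoint (failures take the instance out of the load-balancing rotation, which is what makes rolling updates zero-downtime). The /healthz and /readyz paths are common conventions, not Viduli-specific requirements.

```go
package main

import (
	"log"
	"net/http"
	"sync/atomic"
)

func main() {
	// ready flips to true once start-up work (warming caches, connecting
	// to the database, and so on) has finished.
	var ready atomic.Bool

	mux := http.NewServeMux()

	// Liveness: "the process is not wedged". If this starts failing,
	// Kubernetes restarts the container (self-healing).
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// Readiness: "this instance can take traffic". Kubernetes only routes
	// requests to pods whose readiness probe passes.
	mux.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
		if !ready.Load() {
			http.Error(w, "starting up", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	// Pretend start-up work has finished.
	ready.Store(true)

	log.Fatal(http.ListenAndServe(":8080", mux))
}
```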

Sidecar

The sidecar pattern is a crucial architectural component that extends the capabilities of individual application containers without modifying the application code itself. In Viduli's architecture, each application container is paired with a sidecar proxy that handles all network communication, security, and observability concerns.
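
A sidecar proxy is, at its core, a reverse proxy that sits in front of the application container and handles cross-cutting concerns before traffic reaches the app. The minimal sketch below forwards requests to an application assumed to listen on localhost:9000 and records basic access information; the ports are placeholders, and a real sidecar such as Envoy would also apply mTLS, retries, and policy checks at this point.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The application container listens on localhost:9000; the sidecar
	// listens on the pod's service port and forwards to it. Both ports
	// are placeholders for this sketch.
	app, err := url.Parse("http://127.0.0.1:9000")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		// Cross-cutting concerns live here, outside the application code.
		proxy.ServeHTTP(w, r)
		log.Printf("method=%s path=%s duration=%s", r.Method, r.URL.Path, time.Since(start))
	})

	log.Fatal(http.ListenAndServe(":15001", handler))
}
```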

This approach provides several key advantages. First, it completely separates infrastructure concerns from business logic, allowing developers to focus on application functionality while the sidecar handles cross-cutting concerns like encryption, load balancing, and monitoring. Second, it enables consistent behavior across all services regardless of the programming language or framework used, as the sidecar provides a uniform interface for all network operations.

The sidecar also enables zero-downtime deployments and advanced traffic management strategies. It can intelligently route traffic during deployments, perform health checks, and automatically retry failed requests, all without requiring changes to the application code.

Viduli implements the sidecar pattern using Envoy proxy, the industry-leading, high-performance proxy designed specifically for cloud-native applications. Envoy provides advanced load balancing, comprehensive observability features, and robust security capabilities. Its proven track record in large-scale production environments ensures that Viduli's sidecar implementation can handle the most demanding workloads while maintaining optimal performance and reliability.