| Name: | Description: | Size: | Format: |
|---|---|---|---|
|  |  | 9.98 MB | Adobe PDF |
Authors
Supervisor(s)
Abstract
This work describes, implements, and evaluates a distributed traffic-monitoring architecture with multiple probes, designed to ensure scalability, resilience, and efficient data ingestion. The motivation stems from the limitations of centralized approaches, which degrade under high load. The solution adopts a layered architecture: containerized probes running Suricata and Fluent Bit publish EVE JSON events to Apache Kafka; at the central node, Logstash normalizes and enriches the records, which are indexed in OpenSearch and visualized in Grafana. The methodology compares two Kubernetes topologies: a consolidated one, with the services and a probe in a single pod for quick validation, and a segmented one, which separates hosts, services, and probes by namespace and uses a bridge proxy with Multus to approximate production traffic. Tests with PCAP replays and adversarial traffic scaled from one to five and then ten probes, measuring end-to-end latency, resource utilization, and queue delay. The results indicate stable operation and near-real-time querying with up to five probes, whereas with ten probes the central node saturates CPU and I/O, increasing the backlog and indexing time even after adding Kafka partitions and Logstash consumers. The approach is concluded to be feasible and reproducible; its evolution requires horizontal scaling of the analytical core, index lifecycle policies, and operational targets for latency and ingestion delay, along with validation on a production network via SPAN or TAP.
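To make the probe-to-broker path concrete, the sketch below shows what a minimal probe-side Fluent Bit configuration for this pipeline could look like: tailing Suricata's EVE JSON log and publishing each event to a Kafka topic. The broker address (`kafka.monitoring.svc:9092`), topic name (`suricata-events`), and file path are illustrative assumptions, not values taken from the thesis.

```ini
# Hypothetical Fluent Bit configuration for a Suricata probe.
# Broker, topic, and paths are placeholders for illustration only.

[INPUT]
    Name   tail
    Path   /var/log/suricata/eve.json
    Parser json
    Tag    suricata.eve

[OUTPUT]
    Name    kafka
    Match   suricata.eve
    Brokers kafka.monitoring.svc:9092
    Topics  suricata-events
    # librdkafka tuning is passed through as rdkafka.* properties
    rdkafka.compression.codec snappy
```

In such a design, scaling ingestion on the central node would typically mean increasing the partition count of `suricata-events` and the number of Logstash consumers in the same consumer group, which matches the tuning the abstract reports being exhausted at ten probes.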
Description
Double-degree master's programme with UTFPR - Universidade Tecnológica Federal do Paraná
Keywords
Traffic monitoring; NIDS; Kafka; OpenSearch
