None of our code runs in the OS kernel, but many kernel and device driver settings impact the performance of our code in user space. One such setting is the size of the receive ring buffer on a NIC. This buffer holds Ethernet frames that have arrived until the OS can read them. Interrupt coalescing settings can change the demand for ring buffer space (or "descriptors"). We've recently refreshed one of our documentation pages on interrupt coalescing, but we'll stick to the specific question of sizing the receive ring buffer in this post.

The big benefit of increasing the size of the receive ring buffer is loss avoidance when the kernel is slow at servicing interrupts. Some of the Intel NICs we've used max out at 2048 descriptors in the ring buffer. A ring buffer that size would take 24 ms to fill at 1 gbps with 1500-byte frames, but only 1 ms to fill with 64-byte frames (the smallest possible frames). Those times are computed as ring buffer size in frames * frame size / wire speed; the sketch below works through the arithmetic. You can see that the kernel has between 1 ms and 24 ms (depending on frame size) to service a receive interrupt while still avoiding loss due to ring buffer overflow.
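To make that arithmetic concrete, here is a small sketch (mine, not from the original post) that computes the fill times for the 2048-descriptor example above:

```java
// Fill time for a receive ring: descriptors * frame size (in bits) / wire speed.
// The figures mirror the 2048-descriptor Intel NIC example in the text.
public class RingBufferFillTime {

    // Milliseconds to fill the ring, assuming back-to-back frames at wire speed.
    static double fillTimeMs(int descriptors, int frameBytes, double wireSpeedBitsPerSec) {
        double bits = (double) descriptors * frameBytes * 8;
        return bits / wireSpeedBitsPerSec * 1000.0;
    }

    public static void main(String[] args) {
        double oneGbps = 1_000_000_000.0;
        System.out.printf("1500-byte frames: %.1f ms%n", fillTimeMs(2048, 1500, oneGbps)); // ~24.6 ms
        System.out.printf("64-byte frames: %.1f ms%n", fillTimeMs(2048, 64, oneGbps));     // ~1.0 ms
    }
}
```

Note that the 64-byte case ignores per-frame overhead on the wire (preamble and inter-frame gap), which would stretch the real fill time a little.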
It may not make sense to add more latency in the name of loss avoidance than you'd suffer repairing the loss. On the other hand, going for efficient loss repair can add more latency than the largest possible ring buffer would, so going for the max often makes sense. This also means that your application may be reading data from the NIC that arrived 24 ms or more ago and not know it. Don't go for a large ring buffer if your application would rather have loss than buffering latency. That said, it's best not to think of latency as a consequence of increasing the ring buffer size. Instead, think of it as a consequence of having a kernel that takes time to service an interrupt. The buffer is just there for loss avoidance; any latency apparently due to the ring buffer is really due to kernel interrupt service latency.

One other thing to keep in mind is that the ring buffer probably occupies physical memory on the machine. In the case of 2048 descriptors of 2048 bytes each, that's 4 MB of physical memory. Probably not a biggie on a box with 4 GB of RAM. Finally, you might want to check whether there are independent send and receive ring buffers on your NIC or whether they're shared. A box that does a lot of sending and a lot of receiving on a NIC with shared buffers might not want to give too many descriptors to the receive side.

A few days ago I came across an article about the Spring Boot Admin framework. It is a simple solution created to manage and monitor Spring Boot applications, and it is based on the endpoints exposed by Spring Boot Actuator. It is worth emphasizing that the application only allows monitoring and does not have capabilities such as creating new instances or restarting existing ones, so it is not a competitor to solutions like Pivotal Cloud Foundry. More about that platform can be read in my previous article Spring Cloud Microservices at Pivotal Platform. Despite this, Spring Boot Admin seems interesting enough to take a closer look at it.

If you have to manage a system consisting of multiple microservices, you need to collect all the relevant information in one place. This applies to logs, where we usually use the ELK stack (Elasticsearch + Logstash + Kibana), to metrics (Zipkin), and to details about the status of all application instances that are currently running. If you are interested in more details about ELK or Zipkin, I recommend my previous article Part 2: Creating microservices – monitoring with Spring Cloud Sleuth, ELK and Zipkin. If you are already using Spring Cloud Discovery, I've got good news for you.
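To give a feel for the setup, here is a minimal sketch of a Spring Boot Admin server. This is my illustration rather than code from the article; it assumes the de.codecentric:spring-boot-admin-starter-server dependency is on the classpath, and the package of @EnableAdminServer differs between Spring Boot Admin versions.

```java
// Minimal Spring Boot Admin server (assumes spring-boot-admin-starter-server
// on the classpath; the @EnableAdminServer package varies by version).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import de.codecentric.boot.admin.server.config.EnableAdminServer;

@SpringBootApplication
@EnableAdminServer
public class AdminServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(AdminServerApplication.class, args);
    }
}
```

Monitored applications can then either register themselves with the admin server via the spring-boot-admin-starter-client dependency or, if you are already running Spring Cloud Discovery, be picked up automatically from the discovery registry, which is the good news hinted at above.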