...
Only using 155MB after a reboot (with the index in place).
One day later it’s up to 291MB (the dark bar), still under 512MB.
The SearchStax NDN1
Immediately after a reboot (with a full index), this is what our SearchStax NDN1 reports in the Solr dashboard.
...
The app was accessible to manual checks the whole time, although slow. No Solr OOM or other errors were reported.
Just for comparison…
Both runs use our current ansible-managed infrastructure and our “realistic” list of URLs.
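For reference, here is a minimal sketch of what we assume a wrk script like load_test/multiplepaths.lua.txt does (the real script may differ in its details): it loads the paths listed in the file named by the URLS environment variable, reports how many it found, and has each request hit one of those paths at random.

Code Block

-- Sketch only: an assumption of roughly how a wrk "multiple paths" script works,
-- not a copy of the actual load_test/multiplepaths.lua.txt.
local paths = {}

local function load_paths()
  -- The file of paths is named in the URLS env var (e.g. URLS=./500_urls.txt).
  local filename = os.getenv("URLS") or "urls.txt"
  for line in io.lines(filename) do
    line = line:gsub("%s+$", "")      -- strip trailing whitespace
    if line ~= "" then
      paths[#paths + 1] = line        -- keep non-empty lines as request paths
    end
  end
  print("multiplepaths: Found " .. #paths .. " paths")
end

load_paths()

-- wrk calls request() for every request; wrk.format() builds the raw HTTP
-- request against the base URL given on the command line, using this path.
request = function()
  return wrk.format(nil, paths[math.random(#paths)])
end

wrk appears to execute the script more than once (at startup and again per thread), which is presumably why the “Found 480 paths” line shows up twice in the output below.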
SearchStax NDN1
Code Block
$ URLS=./500_urls.txt wrk --latency -c 10 -t 1 -d 1m -s load_test/multiplepaths.lua.txt https://staging-digital.sciencehistory.org/
multiplepaths: Found 480 paths
multiplepaths: Found 480 paths
Running 1m test @ https://staging-digital.sciencehistory.org/
1 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 335.10ms 182.49ms 1.62s 77.95%
Req/Sec 31.26 15.92 100.00 65.78%
Latency Distribution
50% 297.99ms
75% 383.49ms
90% 570.78ms
99% 958.27ms
1853 requests in 1.00m, 66.83MB read
Requests/sec: 30.83
Transfer/sec: 1.11MB
Original/Current Solr
Code Block
$ URLS=./500_urls.txt wrk --latency -c 10 -t 1 -d 1m -s load_test/multiplepaths.lua.txt https://staging-digital.sciencehistory.org/
multiplepaths: Found 480 paths
multiplepaths: Found 480 paths
Running 1m test @ https://staging-digital.sciencehistory.org/
1 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 355.13ms 221.74ms 1.94s 78.84%
Req/Sec 30.25 15.66 90.00 68.51%
Latency Distribution
50% 301.96ms
75% 412.95ms
90% 650.28ms
99% 1.12s
1784 requests in 1.00m, 64.37MB read
Requests/sec: 29.71
Transfer/sec: 1.07MB
They look pretty similar.
Conclusion?
I think we’re fine with NDN1.
It’s hard to be sure of this, though; Java memory use is hard to understand and predict.
If we are wrong, we can always upgrade to NDN2 at any time, even if we are on an annual contract for NDN1. We can’t rule out needing to do this in the future, especially if our traffic were to increase drastically.