SCT DAQ/DCS/Test Software

Speed Comparisons: IPC vs IS vs Pipes

I thought it would be interesting to look at how fast the Online Software is. To this end, I wrote two small projects. All the code is very simple and almost certainly doesn't represent the absolute maximum in performance, but I hope the results are indicative. The first, ISSpeedTest, creates four executables: two for IS transfer and two for piped transfer. It is available as a zip here. It also, usefully, shows how to publish objects in an IS server. We may well find this is easier and more flexible than using the Online Histogramming (OH) services, which are, after all, just a thin layer on top of IS. The second, IPCSpeedTest, uses IPC to test transfer speed. It is available here. This code also shows how to use sequences. Sequences are IDL templated types used to implement unbounded arrays; note that in IDL, array types must always have a definite bound declared in the IDL file. Sequences are therefore extremely useful for returning, e.g., lists.
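To illustrate the array/sequence distinction, here is a hypothetical IDL fragment; the module, type and interface names are invented for this sketch and are not taken from the actual IPCSpeedTest sources:

```idl
// Hypothetical sketch -- names are invented, not from IPCSpeedTest.
module SpeedTest {
    // An IDL array must declare a fixed bound in the IDL file:
    typedef long FixedList[16];

    // A sequence is unbounded; its length is chosen at run time:
    typedef sequence<long> LongList;

    interface Provider {
        // Returning a sequence lets the server decide how long
        // the returned list is -- handy for variable-length results.
        LongList getValues();
    };
};
```

The language mapping turns the sequence into a length-carrying type (in C++, a class with a `length()` accessor), so no bound ever needs to be agreed in advance.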
Tests

All tests were run on the following machines:
All machines have 256 MB of RAM and identical software installations. They are connected by a 100 Mbit/s switched network. Many other programs were running on all the machines while the tests were run, so again, these results are only indicative.
Results
Comment

The result for pipes sets the benchmark against which CORBA can be compared: it is raw, fast and, of course, has virtually no features. It is therefore impressive that IPC can achieve such speeds across a network while providing so much more functionality. Clearly the claims that CORBA/ILU has been optimized are borne out, although perhaps more could be done to improve performance when both processes are on the same machine. Unfortunately, the speed of IS leaves something to be desired. The performance of IPC shows that optimization of IS could lead to significant speed gains. This low performance is perhaps a little surprising given the statement from the July performance review: "...the IS component is responsible for ... a potentially high rate of experiment monitoring data". Those tests, however, showed its performance in a massively distributed system (228 client computers, 4100 requests/s and 684 simultaneous data providers).
Last modified 15 February 2003. Maintained by Matthew Palmer.