How does Orestes perform in a cloud computing scenario? To measure the performance gain of the Orestes protocol and our implementation, we designed a benchmark scenario:
Scenario: social networking
OO model: inheritance, aggregation, etc.
Access pattern: transactional navigation
Database: Versant Object Database (VOD)
Persistence API: Java Data Objects (JDO)
Protocols: VOD TCP, Orestes
Caches: JDO L2 cache (VOD), web caches (Orestes)
Concurrency: single client, 50 parallel clients
The configurable, transactional JDO benchmark client performs a navigating access pattern: it serially loads randomly chosen objects from the database (drawn from either a uniform or a Zipf distribution) and writes others. The figure above shows the results of one of the benchmark cases we conducted: 50 client machines deployed in the Amazon EC2 cloud in Europe access a Versant database located in California through a web cache in Europe. The clients run the benchmark simultaneously, reading 450 and writing 50 objects in three consecutive runs. For simplicity, we compare the average runtimes of the complete benchmark for JDO over Orestes against JDO with the native VOD TCP protocol (which uses local client-side caching).
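The access pattern described above can be sketched as follows. This is a minimal, hypothetical illustration of the workload shape (450 random reads plus 50 writes per run, with uniform or Zipf object selection), not the actual benchmark client; the class and method names are our own, and the commented-out JDO calls merely indicate where the real client would touch the persistence API.

```java
import java.util.Random;

// Hypothetical sketch of the navigating benchmark workload:
// each run reads 450 randomly chosen objects and writes 50 others.
// Object IDs are drawn uniformly or from a Zipf distribution.
public class WorkloadSketch {
    static final Random rnd = new Random(42); // fixed seed for repeatability

    // Uniformly random object index in [0, n).
    static int uniform(int n) {
        return rnd.nextInt(n);
    }

    // Zipf-distributed index in [0, n), via inverse CDF over harmonic weights.
    static int zipf(int n, double s) {
        double norm = 0;
        for (int i = 1; i <= n; i++) norm += 1.0 / Math.pow(i, s);
        double u = rnd.nextDouble() * norm, cum = 0;
        for (int i = 1; i <= n; i++) {
            cum += 1.0 / Math.pow(i, s);
            if (u <= cum) return i - 1;
        }
        return n - 1;
    }

    public static void main(String[] args) {
        int dbSize = 3000;                      // one of the tested database sizes
        for (int run = 0; run < 3; run++) {     // three consecutive runs
            for (int r = 0; r < 450; r++) {
                int id = zipf(dbSize, 1.0);     // or uniform(dbSize)
                // the real client would load the object here,
                // e.g. via the JDO PersistenceManager
            }
            for (int w = 0; w < 50; w++) {
                int id = uniform(dbSize);
                // the real client would modify and commit the object here
            }
        }
        System.out.println("3 runs x (450 reads + 50 writes) simulated");
    }
}
```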
These are the results we obtained for the average runtime of 50 clients simultaneously loading 450 uniformly and randomly chosen objects out of a given number of objects (300, 3,000, 30,000) across three runs, while writing 50 other objects. The results clearly show the ability of Orestes to scale reads, reduce latency, and relieve load on the database.
How about single-client performance? We ran the same benchmark in a single-client scenario using different workloads and web caches. The results show a significant performance boost. On the left-hand side you can see how the total execution time decreases over three runs while the cache is warming up. On the right-hand side, the case of an already warm cache is depicted and several web caches are compared.
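The warm-up effect can be illustrated with a simple back-of-envelope model: once an object is cached near the client, subsequent reads avoid the long round trip to the remote origin database. All latency figures and hit rates below are illustrative assumptions for this sketch, not measurements from the benchmark.

```java
// Hypothetical model of cache warm-up: per-run read time is the number
// of reads weighted by cache-hit latency vs. origin (remote DB) latency.
public class CacheWarmupModel {
    static double runtimeMs(int reads, double hitRate,
                            double cacheRttMs, double originRttMs) {
        return reads * (hitRate * cacheRttMs + (1 - hitRate) * originRttMs);
    }

    public static void main(String[] args) {
        double cacheRtt = 20;   // assumed RTT to a nearby web cache (ms)
        double originRtt = 160; // assumed RTT to the remote database (ms)
        double[] hitRates = {0.0, 0.9, 1.0}; // cold, warming, warm cache
        for (int run = 0; run < 3; run++) {
            System.out.printf("run %d: ~%.0f ms for 450 reads%n",
                    run + 1, runtimeMs(450, hitRates[run], cacheRtt, originRtt));
        }
    }
}
```

Under these assumed numbers, read time drops from roughly 72 s on a cold cache to 9 s on a fully warm one, mirroring the shape of the measured curves.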