Re: advice on loading and searching large map in memory
On Sat, 19 Feb 2011, eunever32@yahoo.co.uk wrote:
> We have a requirement to query across two disparate systems. Both
> systems are read-only, so once loaded there is no need to check for
> updates. I would plan to reload the data afresh each day.
> Records on both systems map one-to-one and each has 7 million records.
> The first system is legacy and I am reluctant to redevelop it (C code).
> The second is standard Java/Tomcat/SQL.
> The non-relational query can return up to 1000 records.
> This could therefore result in 1000 queries to the relational system
> (just one table) before returning to the user.
Unless you batch them. Can you not do something like:
Collection<LegacyResult> legacyResults = queryLegacySystem();
Iterator<LegacyResult> legacyResultsIterator = legacyResults.iterator();
Collection<CombinedResult> combinedResults = new ArrayList<CombinedResult>();
Connection conn = openDatabaseConnection();
// NB i'm not closing anything after use, but you would have to
PreparedStatement newSystemQuery = conn.prepareStatement(
        "select * from sometable where item_id in (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)");
while (legacyResultsIterator.hasNext()) {
    Map<String, LegacyResult> batch = new HashMap<String, LegacyResult>(10);
    for (int i = 1; i <= 10; ++i) {
        // NB i'm not dealing with the end of the iterator, but you would have to
        LegacyResult legacyResult = legacyResultsIterator.next();
        String id = legacyResult.getItemID();
        batch.put(id, legacyResult);
        newSystemQuery.setString(i, id);
    }
    ResultSet rs = newSystemQuery.executeQuery();
    while (rs.next()) {
        NewSystemResult newResult = makeNewResultFromResultRow(rs);
        LegacyResult legacyResult = batch.get(newResult.getID());
        CombinedResult combinedResult = new CombinedResult(legacyResult, newResult);
        combinedResults.add(combinedResult);
    }
}
Where the batch size might be considerably more than 10?
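For the "end of the iterator" caveat, one option is to cut the IDs into batches up front and size the in clause to the actual batch, so the final short batch just gets its own (smaller) statement. A sketch of that, with illustrative names (toBatches, inClauseSql are not a real API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class Batcher {

    // Split an iterator into batches of at most batchSize elements;
    // the last batch may be shorter.
    static <T> List<List<T>> toBatches(Iterator<T> it, int batchSize) {
        List<List<T>> batches = new ArrayList<List<T>>();
        while (it.hasNext()) {
            List<T> batch = new ArrayList<T>(batchSize);
            for (int i = 0; i < batchSize && it.hasNext(); i++) {
                batch.add(it.next());
            }
            batches.add(batch);
        }
        return batches;
    }

    // Build the SQL for a batch of exactly n ids.
    static String inClauseSql(String table, String column, int n) {
        StringBuilder sql = new StringBuilder(
                "select * from " + table + " where " + column + " in (");
        for (int i = 0; i < n; i++) {
            sql.append(i == 0 ? "?" : ", ?");
        }
        return sql.append(")").toString();
    }
}
```

Since all but the last batch share one size, you can prepare the full-size statement once and only build a second statement for the trailing remainder.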
> To avoid 1000 relational queries I was planning to "cache" the entire
> relational table in memory, via a web service that loads the whole
> table at startup. The web service, running in a separate Tomcat, could
> then be queried 1000 times, or could take a single request with 1000
> values and return all the results in one go.
> Having a separate Tomcat process would help to isolate any memory
> issues, e.g. JVM heap size.
> Can people recommend an approach?
> Because the entire set of records would always be in memory, does that
> make using something like Ehcache pointless?
I think you could use EHCache or similar *instead* of writing your own
cache server.
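If everything genuinely fits in memory, the core of the in-process cache is just a map keyed by item_id, rebuilt once a day; Ehcache mostly adds eviction, TTLs and statistics on top of that. A stdlib-only sketch, where TableCache, Record and reload are made-up names for illustration:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class TableCache {

    // Hypothetical value type; in practice this would mirror the
    // columns of the relational table.
    static class Record {
        final String itemId;
        final String payload;
        Record(String itemId, String payload) {
            this.itemId = itemId;
            this.payload = payload;
        }
    }

    // Swapped wholesale on reload, so readers never see a
    // half-loaded map.
    private volatile Map<String, Record> byId = Collections.emptyMap();

    // Called once a day, per the original plan: rebuild from the DB
    // and replace the old map in a single reference assignment.
    void reload(Iterable<Record> rows) {
        Map<String, Record> fresh = new HashMap<String, Record>();
        for (Record r : rows) {
            fresh.put(r.itemId, r);
        }
        byId = fresh;
    }

    Record get(String itemId) {
        return byId.get(itemId);
    }
}
```

The volatile reference swap is the design point: queries arriving during the daily reload keep hitting yesterday's complete map rather than a partially filled one.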
How big are your objects? If they're a kilobyte each (largeish, for an
object), then seven million will take up seven gigs of memory; if they're
100 bytes (pretty tiny), then they'll take up 700 MB. That's before any
overhead. The former will require you to have a machine with a lot of
memory if you want to avoid thrashing; even the latter means taking a good
chunk of memory just for the cache.
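The arithmetic above, written out (decimal GB/MB, and before JVM object-header and map-entry overhead, which can easily add tens of bytes per entry):

```java
public class MemEstimate {

    // Raw payload estimate: record count times average record size.
    static long estimateBytes(long records, long bytesPerRecord) {
        return records * bytesPerRecord;
    }

    public static void main(String[] args) {
        long big = estimateBytes(7000000L, 1000L);  // 7,000,000,000 bytes ~ 7 GB
        long small = estimateBytes(7000000L, 100L); // 700,000,000 bytes ~ 700 MB
        System.out.println(big + " " + small);
    }
}
```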
tom
--
And the future is certain, give us time to work it out