Friday, September 9, 2011

My Solr Cloud Wishlist

- Add code to Solr so that an admin can configure a limit on how many documents is way too many to hold in a single Solr core, and have crossing that limit kick off an automated process to do one of the following (rough sketches of each piece follow the list):
  - Either, CREATE another core (on the same or a separate machine?) and add it to the ZooKeeper configuration with a weight signifying that all new additions should go to this new core's index only (sketched below). Though I wonder how an update (delete+add) would work?
  - Or, begin sharding the existing core. This could be done by CREATE-ing a copy of it (core_copy) and distributing its index into two shards (core_shard1, core_shard2) under a scheme/policy that does so in a best-effort manner, so that scoring doesn't get thrown off too much by each individual shard's differing IDF. Then SWAP the sharded cores in as a replacement for the overloaded core (sketched below). What would happen to any changes made during this process?
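None of this exists in Solr today, so to make the first piece a bit more concrete, here is a rough sketch of the kind of watchdog I have in mind, written against plain SolrJ. The core name, the URL, and the MAX_DOCS_PER_CORE constant are all made up for illustration; only the rows=0 match-all query used to count documents is standard SolrJ.

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

public class CoreSizeWatchdog {

    // Made-up setting: in the wish this would be an admin-configured limit,
    // not a constant baked into a client program.
    private static final long MAX_DOCS_PER_CORE = 50000000L;

    public static void main(String[] args) throws Exception {
        SolrServer core0 = new CommonsHttpSolrServer("http://localhost:8983/solr/core0");

        // A rows=0 match-all query; numFound is the core's document count.
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(0);
        long numDocs = core0.query(q).getResults().getNumFound();

        if (numDocs > MAX_DOCS_PER_CORE) {
            // This is where the automated "grow" or "shard" process would start.
            System.out.println("core0 is over the limit: " + numDocs + " docs");
        }
    }
}
```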
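For the "add another core" option, only the CoreAdmin CREATE call and the ZooKeeper client calls in this sketch are real APIs; the /solr_routing znode layout and its weight values are pure invention on my part, and some equally imaginary indexing router would have to read them and send every new add to the heaviest core.

```java
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class OverflowCoreCreator {

    public static void main(String[] args) throws Exception {
        // CoreAdmin endpoint of the Solr instance (same or separate machine).
        SolrServer admin = new CommonsHttpSolrServer("http://localhost:8983/solr");

        // Standard CoreAdmin CREATE: a fresh core with its own instanceDir.
        CoreAdminRequest.createCore("core1", "/var/solr/core1", admin);

        // Invented znode layout: one child per core, holding a routing weight.
        // An imaginary indexing router would send all new adds to the core
        // with the highest weight, leaving core0 closed to new documents.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10000, null);
        if (zk.exists("/solr_routing", false) == null) {
            zk.create("/solr_routing", new byte[0],
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }
        zk.create("/solr_routing/core1", "weight=100".getBytes("UTF-8"),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // Demote the old core (assumes its znode already exists).
        zk.setData("/solr_routing/core0", "weight=0".getBytes("UTF-8"), -1);
        zk.close();
    }
}
```

An update (delete+add) would presumably still need to be routed to whichever core holds the old version of the document, which is exactly the question left open in the list above.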
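For the sharding option, the interesting part, splitting core_copy into core_shard1 and core_shard2 without skewing per-shard IDF too badly, is exactly what doesn't exist yet, so this last sketch only shows the final step: swapping the overloaded core out with the stock CoreAdmin SWAP action. The core_frontend core, assumed to have a request handler whose default shards parameter points at the two new shard cores, is my own guess at how "two cores replace one" might look.

```java
import java.io.InputStream;
import java.net.URL;

public class ShardSwapper {

    public static void main(String[] args) throws Exception {
        // Assumption: core_frontend is a thin core whose default request
        // handler carries shards=host/core_shard1,host/core_shard2, so a
        // query against it fans out to both new shards.
        // SWAP is a stock CoreAdmin action: it exchanges the names of two
        // existing cores, so "core0" now answers from the sharded setup
        // while the old single index stays reachable as core_frontend.
        String swapUrl = "http://localhost:8983/solr/admin/cores"
                       + "?action=SWAP&core=core0&other=core_frontend";

        InputStream in = new URL(swapUrl).openStream();
        in.close();

        // Documents added or updated while the index was being split would
        // still need to be replayed against the new shards; that is the
        // open question at the end of the list above.
    }
}
```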