algoliasearch-client-python/algoliasearch/search/client.py, lines 5990 to 5997 in 1f0ed35:

```python
def replace_all_objects(
    self,
    index_name: str,
    objects: List[Dict[str, Any]],
    batch_size: int = 1000,
    scopes=["settings", "rules", "synonyms"],
    request_options: Optional[Union[dict, RequestOptions]] = None,
) -> ReplaceAllObjectsResponse:
```
I'm in the process of updating from v3 to v4 and noticed that this function no longer seems to support iterators, which will cause problems for our systems that use it to bulk-replace large datasets. It's possible that a small fix inside chunked_batch() is all that's needed to support iterators.
The legacy/v1 API used to support iterators: https://www.algolia.com/doc/libraries/sdk/v1/methods/replace-all-objects#param-objects

> Python: Use an iterator instead of a list to prevent memory issues, especially if you want to replace many records.
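To illustrate the chunked_batch() idea, here is a minimal sketch (plain standard library, nothing Algolia-specific, and the helper name `iter_batches` is just something I made up) of a lazy chunker that would let the batching step hold at most `batch_size` records in memory whether `objects` is a list or a generator:

```python
from itertools import islice
from typing import Any, Dict, Iterable, Iterator, List


def iter_batches(
    objects: Iterable[Dict[str, Any]], batch_size: int = 1000
) -> Iterator[List[Dict[str, Any]]]:
    """Yield successive lists of at most `batch_size` records.

    Works for both lists and one-shot iterators/generators, and never
    materializes more than one batch in memory at a time.
    """
    iterator = iter(objects)
    while True:
        batch = list(islice(iterator, batch_size))
        if not batch:
            break
        yield batch


# Example: a generator that streams records from somewhere (DB cursor, file, ...)
def stream_records() -> Iterator[Dict[str, Any]]:
    for i in range(10_000):
        yield {"objectID": str(i), "value": i}


for batch in iter_batches(stream_records(), batch_size=1000):
    # chunked_batch() could issue one batch request per chunk here
    # instead of slicing a fully materialized list.
    ...
```

Since islice consumes the underlying iterator in place, the same loop works unchanged for lists, database cursors, or any other one-pass iterable.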
- Is this an intentional design decision?
- Short of re-architecting our synchronization strategy, do you have a recommended best practice for those of us using replace_all_objects on datasets that are too large to fit in a list in memory? (My current workaround idea is sketched below.)
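For context on the second question, this is the interim workaround I'm considering: reproducing the copy → batched add → move sequence that replace_all_objects performs, but pulling records from an iterator one chunk at a time. It's only a sketch under assumptions: the method and model names (operation_index, OperationIndexParams, batch, BatchWriteParams, BatchRequest, wait_for_task), and which index each task should be waited on, reflect my reading of the v4 client and may not match the current signatures exactly.

```python
# Sketch of a streaming "replace all": copy settings/rules/synonyms to a
# temporary index, stream batches into it from an iterator, then move it over
# the original index. Method/model names are my assumptions about the v4 client.
from itertools import islice
from typing import Any, Dict, Iterable

from algoliasearch.search.models import (
    BatchRequest,
    BatchWriteParams,
    OperationIndexParams,
)


def streaming_replace_all(
    client: Any,  # assumed to be a v4 SearchClientSync instance
    index_name: str,
    objects: Iterable[Dict[str, Any]],
    batch_size: int = 1000,
) -> None:
    tmp_index = f"{index_name}_tmp"

    # 1. Copy settings, synonyms and rules (no records) to the temporary index.
    copy = client.operation_index(
        index_name,
        OperationIndexParams(
            operation="copy",
            destination=tmp_index,
            scope=["settings", "rules", "synonyms"],
        ),
    )
    client.wait_for_task(tmp_index, copy.task_id)

    # 2. Stream records into the temporary index one chunk at a time, so we
    #    never hold more than `batch_size` records in memory. For safety you
    #    may also want to wait on each batch's task_id before moving.
    iterator = iter(objects)
    while True:
        chunk = list(islice(iterator, batch_size))
        if not chunk:
            break
        client.batch(
            tmp_index,
            BatchWriteParams(
                requests=[BatchRequest(action="addObject", body=obj) for obj in chunk]
            ),
        )

    # 3. Atomically move the temporary index over the original one.
    move = client.operation_index(
        tmp_index,
        OperationIndexParams(operation="move", destination=index_name),
    )
    client.wait_for_task(tmp_index, move.task_id)
```

If something like this is close to what replace_all_objects already does internally, then accepting an Iterable for `objects` (and chunking it lazily, as in the first sketch) seems like it could be a fairly contained change.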
Thanks!