| Name |
| --- |
| config |
| lib |
| .gitignore |
| pymongoexport_csv.py |
| pymongoexport_json.py |
| requirements.txt |
Found that the old pagination system, based on skip() and limit(), scaled terribly for large collections. However, if pagination is based not on skipping documents but on querying for tweets with a value higher (or lower) than the last one seen in an indexed field, the query is much faster. Thus, using the unique "id" field as the pagination index makes retrieval practical for large collections.
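The range-based approach above can be sketched as follows. This is a minimal, hypothetical helper (the function name, page size, and collection layout are assumptions, not part of this repo); it only relies on the cursor methods `find()`, `sort()`, and `limit()` that a pymongo `Collection` exposes, and it pages by the unique `"id"` field instead of calling `skip()`:

```python
def iter_pages(collection, page_size=1000):
    """Yield pages of documents ordered by "id", using range queries.

    `collection` is expected to behave like a pymongo Collection:
    find(query) returns a cursor supporting .sort(key, direction)
    and .limit(n). No skip() is ever issued, so each page is an
    index-backed range scan regardless of how deep into the
    collection we are.
    """
    last_id = None
    while True:
        # First page: no filter. Later pages: everything after the
        # last "id" we have already returned.
        query = {} if last_id is None else {"id": {"$gt": last_id}}
        page = list(collection.find(query).sort("id", 1).limit(page_size))
        if not page:
            break
        yield page
        last_id = page[-1]["id"]  # resume point for the next page
```

Against a real deployment this would be driven by something like `iter_pages(MongoClient()["twitter"]["tweets"])` (database and collection names are illustrative). The key point is that each page costs one index range scan, whereas `skip(n)` forces the server to walk and discard `n` documents before returning anything.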