Help with Streaming and Chunk Processing for Large JSON Data (60 GB) from Kenna API
Asif Ali Hirekumbi
asifali.ha at gmail.com
Fri Sep 27 02:17:12 EDT 2024
Dear Python Experts,
I am working with the Kenna application's API to retrieve vulnerability
data. The export endpoint provides a single, massive gzip-compressed JSON
file, approximately 60 GB in size. Handling such a large dataset in one
pass is proving challenging, particularly in terms of memory management.
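To illustrate the problem, the obvious one-shot approach below has to hold
the compressed body, the decompressed bytes, and the parsed objects in
memory at the same time, which is hopeless at this size. (The URL and the
token header are placeholders, not the real Kenna values.)

    import gzip
    import json

    import requests

    # Placeholders for the real Kenna export URL and API token header.
    URL = "https://api.example.com/data_exports/export.json.gz"
    HEADERS = {"X-Risk-Token": "YOUR_API_TOKEN"}

    # requests buffers the entire compressed body, gzip.decompress()
    # materialises the full decompressed bytes, and json.loads() then
    # builds every Python object in memory all at once.
    resp = requests.get(URL, headers=HEADERS)
    data = json.loads(gzip.decompress(resp.content))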
I am looking for guidance on how to efficiently stream this data and
process it in chunks using Python. Specifically, I am wondering whether
the requests library, or any other library, can pull data from the API
endpoint in a memory-efficient manner; a rough sketch of what I have in
mind is below.
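This is a minimal sketch of the streaming approach I am considering, using
requests with stream=True plus the third-party ijson package for
incremental parsing. The URL, the auth header, and the "vulns" top-level
key are assumptions on my part and not verified against the actual export
format:

    import gzip

    import ijson     # third-party incremental JSON parser: pip install ijson
    import requests

    # Placeholders: the real export URL and token header come from the
    # Kenna data-export documentation linked below.
    URL = "https://api.example.com/data_exports/export.json.gz"
    HEADERS = {"X-Risk-Token": "YOUR_API_TOKEN"}

    count = 0
    with requests.get(URL, headers=HEADERS, stream=True, timeout=300) as resp:
        resp.raise_for_status()
        # stream=True stops requests from buffering the body; resp.raw is
        # the undecoded byte stream, which gzip.GzipFile decompresses lazily.
        with gzip.GzipFile(fileobj=resp.raw) as stream:
            # "vulns.item" assumes the export looks like {"vulns": [...]};
            # the prefix must be adjusted to the file's real top-level key.
            for vuln in ijson.items(stream, "vulns.item"):
                count += 1   # replace with real per-record processing
                if count % 100_000 == 0:
                    print(f"processed {count:,} records")

    print(f"done: {count:,} records total")

The hope is that gzip decompression and JSON parsing both happen
incrementally off the socket, so memory use stays roughly constant no
matter how large the file is. I have not confirmed this works against the
Kenna export, hence this email.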
Here are the relevant Kenna API documentation links:
- Kenna API Documentation
<https://apidocs.kennasecurity.com/reference/welcome>
- Kenna Vulnerabilities Export
<https://apidocs.kennasecurity.com/reference/retrieve-data-export>
If anyone has experience with similar use cases or can offer any advice, it
would be greatly appreciated.
Thank you in advance for your help!
Best regards
Asif Ali