Tech Talk

Can we grab multiple time ranges of data from the REST API?

Cole_McDonald
12 months ago

I’m working on another thread to report some predictive models and metrics to drive engineer response actions around volume growth/expansion. To do that, I’m calling the REST API 10 times per volume instance per device (one datapoint value sampled every 9 days, back to 90 days), building out a separate request for each day’s values (today - 9 days, today - 18 days, etc.).

Is there a way to request multiple start/end values from the REST API in a single request?
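
Roughly what each per-range request looks like today (sketch only: the endpoint path, the start/end query parameter names, and the auth header below are placeholders, not the real API call):

    # Sketch only: endpoint path, query parameter names, and auth are placeholders.
    $volumeId    = 123
    $authHeaders = @{ Authorization = "Bearer <token>" }
    $baseUri     = "https://example.local/api/volumes/$volumeId/data"
    $now         = [DateTimeOffset]::UtcNow

    # One request per sample point: today - 9 days, today - 18 days, ... back to 90 days.
    $samples = foreach ($daysBack in 9, 18, 27, 36, 45, 54, 63, 72, 81, 90) {
        $end   = $now.AddDays(-$daysBack)
        $start = $end.AddHours(-1)    # narrow window around each sample point
        $uri   = "{0}?start={1}&end={2}" -f $baseUri, $start.ToUnixTimeSeconds(), $end.ToUnixTimeSeconds()
        Invoke-RestMethod -Uri $uri -Headers $authHeaders
    }

Ten of those per volume instance is what I’d like to collapse into a single request if the API supports it.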

Published 12 months ago
  • Anonymous

    If you ran your API calls in parallel, that would help. There shouldn’t be any issue hitting the API that frequently because it’s different paths, right? Or are the query params the only thing that changes?
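
    Something like this, for example (just a sketch: ForEach-Object -Parallel needs PowerShell 7+, on Windows PowerShell 5.1 you’d reach for runspaces or Start-Job instead, and the URI and auth values here are made up):

        # Sketch only: the URI and auth header are placeholders.
        $baseUri     = "https://example.local/api/volumes/123/data"
        $authHeaders = @{ Authorization = "Bearer <token>" }

        # Fan the per-range calls out in parallel instead of one after another.
        $samples = 9, 18, 27, 36, 45, 54, 63, 72, 81, 90 | ForEach-Object -Parallel {
            $end   = [DateTimeOffset]::UtcNow.AddDays(-$_)
            $start = $end.AddHours(-1)
            $uri   = "{0}?start={1}&end={2}" -f $using:baseUri, $start.ToUnixTimeSeconds(), $end.ToUnixTimeSeconds()
            Invoke-RestMethod -Uri $uri -Headers $using:authHeaders
        } -ThrottleLimit 5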

  • Bummer. The test server for developing the DataSource used 10 API connections; the next device I added had 9 volumes… so 90 API calls for a single device. All of these happen from the collector, since they don’t need access to the devices themselves at all. I’ll think it over and see if I can figure out a more efficient way to collect the data from the API.

  • Params are the only change. Parallel vs. serial, I’d have the same issue: 10 API calls per volume, so the TCP port lift becomes 10 x instances. If there are 30 devices with an average of 3 volumes, that’s roughly 900 TCP connections at about the same time. That’s a dent in the ephemeral port range, and it’s all on the collector, so TCP is already the tight resource there.

  • Anonymous

    Not only that, but if the params are the only things that change, you run the risk of hitting API rate limits. Plus, what if you don’t have 90 days’ worth of data? Do you quit making the other calls once you run into the start of the data?

    I was about to suggest you use the script cache to handle the distant past, so you’d only call for the most recent data. However, that’s not available in PowerShell. You could implement your own cache using the file system (collector tasks can write to the file system): just write the data out to a file as JSON, check for the file on every run, and if it exists, pull in the JSON.
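
    The skeleton would look something like this (sketch only; the cache path and the shape of what you store are whatever makes sense for your DataSource):

        # Sketch of a file-based cache; the path and object shape are assumptions.
        $cachePath = "C:\ProgramData\VolumeTrend\device123_vol1.json"

        if (Test-Path $cachePath) {
            # Reuse the samples collected on previous runs.
            $cache = @(Get-Content -Raw -Path $cachePath | ConvertFrom-Json)
        } else {
            $cache = @()
        }

        # ... call the API only for the sample points that aren't in $cache yet,
        # append them, then persist the whole set for the next run ...
        ConvertTo-Json -InputObject $cache -Depth 5 | Set-Content -Path $cachePath

    That way the 90-day history lives on disk and each run only has to ask the API for the newest sample.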