Cloud instance start time comparison [closed]
Is anyone keeping track of instance start times across the various cloud providers (AWS, Azure, GCP, etc.)?
Obviously this depends on a lot of factors (instance type, instance availability, operating system, the definition of 'available', etc.), so a matrix with quartiles would be awesome (e.g. 98% of m1.small instances running Amazon Linux in AWS eu-west-1 are available within 34 seconds).
The reason I'm asking: I have a workload that runs intermittently, but when it is needed, latency (i.e. start-up time) matters. For cost reasons I'd prefer that the instance(s) not be running when they're not in use.
Unfortunately Lambdas/web functions etc. won't work for me (although I'll be using them to start the instance(s)).
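For reference, the starter function itself can be tiny. Here is a minimal sketch for AWS, assuming boto3 in the Lambda runtime, a placeholder instance ID, and an execution role that allows ec2:StartInstances:

```python
# Minimal sketch of a Lambda handler that starts a stopped EC2 instance.
# Assumptions: boto3 is available in the runtime, the instance ID below is a
# placeholder, and the function's IAM role permits ec2:StartInstances.
import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical instance ID

ec2 = boto3.client("ec2")


def handler(event, context):
    # Starting an instance that is already running is harmless; the API
    # simply reports its current state.
    resp = ec2.start_instances(InstanceIds=[INSTANCE_ID])
    state = resp["StartingInstances"][0]["CurrentState"]["Name"]
    return {"instance": INSTANCE_ID, "state": state}
```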
amazon-web-services azure google-cloud-platform cloud
asked May 4 at 13:50
user44384
closed as off-topic by Sven♦ May 4 at 14:58
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "Requests for product, service, or learning material recommendations are off-topic because they attract low quality, opinionated and spam answers, and the answers become obsolete quickly. Instead, describe the business problem you are working on, the research you have done, and the steps taken so far to solve it." – Sven
1 Answer
No, nothing useful operationally. The most rigorous cross-cloud study I found was done in 2012 at the University of Virginia: A Performance Study on the VM Startup Time in the Cloud (DOI). That was a long time ago, before GCP existed as an IaaS offering and when Azure was still branded Windows Azure!
Anecdotal blog posts covering a single provider are more common than multi-cloud comparisons, and again they are already out of date; no one maintains this continuously that I know of. But sometimes you can find a batch of data points, for example: Understanding and Profiling GCE cold-boot time.
Do your own timing of your instance types, with your boot images, in your regions, on your clouds. It will probably be one or two minutes until SSH is available, plus or minus some seconds.
Increasing capacity faster than about 120 seconds requires booting instances a little before you need them, perhaps automatically via an instance scale group. That's the price of low latency.
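A rough timing harness for AWS could look like the sketch below. It is only a sketch: boto3, the instance ID, region and port are placeholders, the instance is assumed to get a public IP, and "ready" is defined as TCP port 22 accepting connections.

```python
# Rough sketch: measure how long it takes from the start request until SSH answers.
# Instance ID, region and port are placeholders for illustration.
import socket
import time

import boto3

INSTANCE_ID = "i-0123456789abcdef0"  # hypothetical
REGION = "eu-west-1"
SSH_PORT = 22

ec2 = boto3.client("ec2", region_name=REGION)


def wait_for_port(host, port, timeout=600):
    """Block until a TCP connection to host:port succeeds; return seconds waited."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        try:
            with socket.create_connection((host, port), timeout=2):
                return time.monotonic() - start
        except OSError:
            time.sleep(1)
    raise TimeoutError(f"{host}:{port} not reachable within {timeout}s")


t0 = time.monotonic()
ec2.start_instances(InstanceIds=[INSTANCE_ID])

# Wait until the API reports 'running', then look up the public IP.
ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])
desc = ec2.describe_instances(InstanceIds=[INSTANCE_ID])
ip = desc["Reservations"][0]["Instances"][0]["PublicIpAddress"]

ssh_delay = wait_for_port(ip, SSH_PORT)
total = time.monotonic() - t0
print(f"SSH reachable {total:.1f}s after the start request "
      f"({ssh_delay:.1f}s of that after the instance reported 'running')")
```

Run something like this a few dozen times per instance type and region and you get exactly the kind of quartile data the question asks for.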
answered May 4 at 14:37
John Mahowald
Thanks. Yeah, 2 minutes is painful but should be OK for latency. It feels like the kind of stat that would be interesting to track (e.g. for the time to do a rolling upgrade). I suspect people are tracking it, just not publishing it. – user44384, May 4 at 14:47
120 seconds is just a guess; measure it. Instance boot time is less interesting with a sufficiently large and automated fleet, where scaling and upgrade changes happen constantly. If there is always enough capacity to serve the next request, response time rarely suffers during capacity scaling. – John Mahowald, May 4 at 15:13