How does geography affect network latency?


I have the option of hosting our database/web server at a managed hosting company on the East Coast (US) or one on the West Coast (US). Our company is based out of New York City, and both hosting providers give our box a dedicated T1 line.

How much of a performance hit (assuming all other factors are equal) would I be taking in terms of network latency if I went with the one on the West Coast as opposed to the one on the East Coast? I'm not too sure how geography affects internet speeds when the numbers and distances get really large (T1s and above, and thousands of miles).

Thanks!

Tags: networking, hosting, internet, latency
asked Sep 2 '09 at 21:12 by neezer

  • Oh, how I envy your position. We've got a client in the Philippines who are running on ISDN as their only link for all their network traffic. You wanna talk latency? Try 700ms between them and us (in Australia) when there's NO traffic on the line :(

    – Mark Henderson
    Sep 2 '09 at 21:37

6 Answers

Answer by joeqwerty (10 votes), answered Sep 2 '09 at 21:30, edited Feb 11 '17 at 16:01

There is a distance delay, and all other things being equal (routing efficiency, processing overhead, congestion, etc.) a site on the West Coast accessed by a host on the East Coast is going to take longer to reach than if that site were on the East Coast, but we're talking milliseconds here.

  • Link down.......

    – Pacerier
    Feb 11 '17 at 10:33


















Answer by kubanczyk (11 votes), answered Sep 2 '09 at 21:52

All other things being equal, you will have an additional 44 milliseconds of latency just because of the speed of light; give or take, that's 1/20 of a second for each packet round trip. Not much for typical web usage. Passable for ssh sessions. Substantial if you access your DB directly with a lot of small consecutive transactions.

I've ignored the extra latency caused by additional routers/repeaters, which could be much, much higher. I've assumed a distance of 4,400 km and a speed of light in fiber of 200,000 km/s.
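
That back-of-the-envelope figure is easy to reproduce. Here is a minimal Python sketch using the same assumptions (4,400 km one way, 200,000 km/s in fiber); the query count at the end is purely illustrative and not something taken from this answer:

```python
# Propagation delay only, using the assumptions stated above.
# Router, queueing and serialization delays are deliberately ignored.

DISTANCE_KM = 4_400          # assumed NYC <-> West Coast fiber path
FIBER_SPEED_KM_S = 200_000   # light in fiber travels at roughly 2/3 of c

one_way_ms = DISTANCE_KM / FIBER_SPEED_KM_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"one-way:    {one_way_ms:.0f} ms")     # ~22 ms
print(f"round trip: {round_trip_ms:.0f} ms")  # ~44 ms

# Why chatty DB access hurts: every small query pays the full round trip.
queries = 50  # hypothetical number of small sequential queries per page view
print(f"{queries} sequential queries: {queries * round_trip_ms:.0f} ms of propagation delay alone")
```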






  • 1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provider is good.

    – Ryaner
    Sep 2 '09 at 22:08

  • @Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?

    – Pacerier
    Feb 11 '17 at 10:19












  • lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers.

    – Ryaner
    Feb 13 '17 at 11:24


















Answer by Sean Reifschneider (5 votes), answered Sep 3 '09 at 3:11

We had a client that we spent a fair bit of time going around and around with on this. They were originally hosted in New York, and their staff is mostly located in the Boston area. They moved their servers to our facility in Denver, about two-thirds of the way across the country.

Once they moved, they started bringing up performance problems from the Comcast links in their home offices. They used to have <10ms latency, and it went up to 80-ish ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the west coast would potentially be getting better performance.

We went back and forth a few times. After around 6 months we switched to a different primary upstream ISP, for reasons unrelated to this client (better pricing, more bandwidth, and unhappiness with the number of maintenance windows on the other provider), and with the new provider we were getting around 45ms average latency for this client. At that point their performance concerns seemed to have gone away.

That's just to give you one real case where this sort of issue came up, and the numbers involved.

Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore everything but the last hop listed in that output. Van Jacobson says that humans notice latency starting at 400ms, but realize that many connections require multiple back-and-forth exchanges, so a 100ms latency can quickly add up to a second...
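
To make that compounding concrete, here is a small illustrative Python sketch; the round-trip counts and the 100 ms figure are assumptions chosen for illustration, not measurements from this answer:

```python
# Illustration: how a 100 ms round trip compounds over the multiple exchanges
# a single page view can require. The step counts below are rough assumptions.
RTT_MS = 100

round_trips = [
    ("DNS lookup", 1),
    ("TCP handshake", 1),
    ("TLS handshake", 2),           # a full TLS handshake; newer TLS versions need fewer
    ("HTTP request/response", 1),
]

total = 0
for step, n in round_trips:
    total += n * RTT_MS
    print(f"{step:22s} {n} x {RTT_MS} ms  (running total: {total} ms)")

assets = 5  # hypothetical sequential follow-up requests (images, CSS, ...)
total += assets * RTT_MS
print(f"plus {assets} sequential asset fetches: about {total} ms in total")
```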



From my experience, 250ms latency starts to feel like a noticeably slow connection. 10ms or better feels like a blazing connection. It really depends on what you're doing.



Sean






  • Denver, meaning CO?

    – Pacerier
    Feb 11 '17 at 10:22











  • Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.

    – Pacerier
    Feb 11 '17 at 10:28



















Answer by EBGreen (3 votes), answered Sep 2 '09 at 21:19, edited Sep 2 '09 at 21:38

Well, packets travel down the wire at close enough to the speed of light that raw transmission time is negligible when compared to other factors. What matters is the efficiency of the routing and how fast the routing devices can do the routing. That unfortunately can't be determined purely from geographical distance. There is a strong correlation between distance and latency, but there is no hard and fast rule that I am aware of.

  • Is there any way to determine the efficiency of the routing? Is there some sort of test I could run on the two servers to see that, and what would that number mean?

    – neezer
    Sep 2 '09 at 21:29






  • Packets travel at best .66 (optical) and ~.55 (copper) times the speed of light.

    – Noah Campbell
    Sep 2 '09 at 21:34

  • @Noah - Is that better?

    – EBGreen
    Sep 2 '09 at 21:38











  • Better than the speed of light? No, it's about half the speed of light.

    – Noah Campbell
    Sep 2 '09 at 21:45











  • I mean, is my edit a better description?

    – EBGreen
    Sep 2 '09 at 21:47

Answer by Noah Campbell (3 votes), answered Sep 2 '09 at 21:42

The number of hops between point A and point B will introduce latency. Count the number of hops, since this is your best indicator.

A few words of caution: methods for evaluating the network path are not necessarily consistent with how the actual packets will flow. ICMP may be routed differently and given a different QoS. Also, traceroute typically looks in one direction only, i.e. source to destination. Here are some handy tricks.

For traceroute, try using -I, -U or -T to see how the path varies. Also look at -t 16 or -t 8.

Ping is actually pretty helpful: ping -R will show you the path that the reply takes to return! If it differs from the path going out, then see where it is going.
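
If you want to script that comparison against a candidate host, here is a rough Python sketch. The hostname is a placeholder, the flags are the ones named above, and exact option support varies between traceroute/ping implementations (TCP SYN traceroute typically needs root):

```python
# Run the probe variants mentioned above against a candidate host and print the
# output, so the forward paths (and the recorded return path) can be compared.
import subprocess

HOST = "host.example.com"  # placeholder: one of the candidate servers

def run(cmd):
    print("$", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
    print(result.stdout or result.stderr)

run(["traceroute", "-I", HOST])               # ICMP echo probes
run(["traceroute", "-U", HOST])               # UDP probes
run(["traceroute", "-T", "-p", "443", HOST])  # TCP SYN probes (typically needs root)
run(["ping", "-R", "-c", "4", HOST])          # record route: shows part of the return path
```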






Answer by l0c0b0x (2 votes), answered Sep 2 '09 at 21:33

I think geography will have a lot to do with packet transmission time, since the further you go, the more hops you will most likely add, affecting overall latency. If your customers are going to be based mostly on the West Coast, then I'd go for the West Coast hosting... same thing on the East Coast. If your customers will be coming from all over the US, or the world... then you'll just have to make the hard decision as to which side gets the lower latency.

In our case we're on our own network (one big intranet), and are able to let our routers make decisions based on OSPF throughout the state :) Unfortunately, anything off our network relies primarily on our ISP's layout.






  • A great tool you can use is MTR to not just find out the latency, but packet loss, route info, jitter, etc. I made a post about the information it gives here: serverfault.com/questions/21048/…

    – l0c0b0x
    Sep 2 '09 at 21:47










