How does geography affect network latency?
I have the option of hosting our database/web server at a managed hosting company on the East Coast (US) or at one on the West Coast (US). Our company is based in New York City, and both hosting providers give our box a dedicated T1 line.
How much of a performance hit (assuming all other factors are equal) would I take in terms of network latency if I went with the one on the West Coast instead of the one on the East Coast? I'm not sure how geography affects internet speeds once the numbers and distances get really large (T1s and above, and thousands of miles).
networking hosting internet latency
Oh, how I envy your position. We've got a client in the Philippines who is running on ISDN as their only link for all their network traffic. You wanna talk latency: try 700ms between them and us (in Australia) when there's NO traffic on the line :(
– Mark Henderson♦
Sep 2 '09 at 21:37
asked Sep 2 '09 at 21:12
neezer
6 Answers
There is a distance delay, and all other things being equal (routing efficiency, processing overhead, congestion, etc.), a site on the West Coast accessed by a host on the East Coast is going to take longer to reach than if that site were on the East Coast, but we're talking milliseconds here.
answered Sep 2 '09 at 21:30 (edited Feb 11 '17 at 16:01)
– joeqwerty
Link down...
– Pacerier
Feb 11 '17 at 10:33
All other things being equal, you will incur an additional 44 milliseconds of latency just because of the speed of light: give or take 1/20 of a second for each packet round trip. That's not much for typical web usage, passable for ssh sessions, and substantial if you access your DB directly with a lot of small consecutive transactions.
I've ignored the extra latency caused by additional routers/repeaters, which could be much, much higher. I've assumed a distance of 4400 km and a speed of light in fiber of 200,000 km/s.
answered Sep 2 '09 at 21:52
– kubanczyk
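The back-of-the-envelope figure above is easy to reproduce. A minimal sketch using the answer's own assumptions (4400 km path, 200,000 km/s in fiber; the function name is mine):

```python
# Rough propagation-delay estimate for a coast-to-coast fiber path.
# Ignores router/repeater delay and assumes a straight-line fiber run,
# so real-world round trips will be somewhat higher.

def propagation_rtt_ms(distance_km: float, fiber_speed_km_s: float = 200_000) -> float:
    """Round-trip propagation delay in milliseconds."""
    one_way_s = distance_km / fiber_speed_km_s
    return 2 * one_way_s * 1000

print(propagation_rtt_ms(4400))  # 44.0, matching the answer's estimate
```

Note this also recovers the rule of thumb from the comment below it: about 1 ms of round-trip delay per 100 km of fiber.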
1 ms per 100 km is only accurate if there are no routers in between. Fibre repeaters don't add latency, provided your provider is good.
– Ryaner
Sep 2 '09 at 22:08
@Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?
– Pacerier
Feb 11 '17 at 10:19
lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TL;DR: optical regeneration repeaters do add latency, but it is basically zero in real terms. You will see higher latency added by the DCM on either end and your endpoint routers.
– Ryaner
Feb 13 '17 at 11:24
We had a client that we spent a fair bit of time going around and around with on this. They were originally hosted in New York, and their staff is mostly located in the Boston area. They moved their servers to our facility in Denver, about two-thirds of the way across the country.
Once they moved, they started bringing up performance problems from the Comcast links in their home offices. They used to have <10 ms latency, and it went up to around 80 ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the West Coast would potentially be getting better performance.
We went back and forth a few times. After around 6 months, we switched to a different primary upstream ISP for reasons unrelated to this client (better pricing, more bandwidth, unhappiness with the number of maintenance windows on the other provider), and with the new provider we were getting around 45 ms average latency for this client. At that point their performance concerns seem to have gone away.
This is just to give you some experience of one case where this sort of issue was seen, and the numbers involved.
Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore anything but the last hop listed in that output. Van Jacobson says that humans notice latency starting at 400 ms, but realize that many connections require multiple back-and-forth exchanges, so a 100 ms latency can quickly add up to a second...
From my experience, 250 ms latency starts to feel like a noticeably slow connection; 10 ms or better feels like a blazing connection. It really depends on what you're doing.
answered Sep 3 '09 at 3:11
– Sean Reifschneider
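The point about round trips compounding can be made concrete. A small illustrative sketch (only the 100 ms figure comes from the answer; the function and exchange count are mine):

```python
# How per-request latency compounds over chatty, sequential exchanges:
# each request must wait for the previous response before it can start.

def total_wait_ms(rtt_ms: float, round_trips: int) -> float:
    """Total time spent waiting on the network for sequential round trips."""
    return rtt_ms * round_trips

# Ten sequential request/response exchanges at 100 ms RTT:
print(total_wait_ms(100, 10))  # 1000.0, i.e. a full second spent waiting
```

This is why a protocol that batches or pipelines requests hurts far less over a high-latency link than one that makes many small consecutive calls.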
Denver, meaning CO?
– Pacerier
Feb 11 '17 at 10:22
Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.
– Pacerier
Feb 11 '17 at 10:28
Well, packets travel down the wire close enough to the speed of light that raw transmission time is negligible compared to other factors. What matters is the efficiency of the routing and how fast routing devices can do the routing. That, unfortunately, can't be determined purely from geographical distance. There is a strong correlation between distance and latency, but no hard and fast rule that I am aware of.
Is there any way to determine the efficiency of the routing? Is there some sort of test I could run on the two servers to see that, and what would that number mean?
– neezer
Sep 2 '09 at 21:29
Packets travel at best .66 (optical) and ~.55 (copper) times the speed of light.
– Noah Campbell
Sep 2 '09 at 21:34
@Noah - Is that better?
– EBGreen
Sep 2 '09 at 21:38
Better than the speed of light? No, it's about half the speed of light.
– Noah Campbell
Sep 2 '09 at 21:45
I mean, is my edit a better description?
– EBGreen
Sep 2 '09 at 21:47
The number of hops between point A and point B will introduce latency. Count the number of hops, since this is your best indicator.
A few words of caution: methods for evaluating the network path are not consistent with how the actual packet will flow. ICMP may be routed and given a different QoS. Also, traceroute typically looks in one direction, i.e. source to destination. Here are some handy tricks.
For traceroute, try using -I, -U or -T to see how the path varies. Also look at -t 16 or -t 8.
Ping is actually pretty helpful. ping -R will show you the path that it takes to return! If it differs from the path going out, then see where it is going.
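The hop-count point above can be sketched as a toy model: total RTT is roughly fiber propagation plus a per-hop processing cost. The per-hop figure below is a made-up illustrative value, not a measurement from the thread:

```python
# Toy RTT model: round-trip fiber propagation plus a fixed
# processing/queuing cost at each router hop, in both directions.

def estimate_rtt_ms(distance_km: float, hops: int,
                    per_hop_ms: float = 0.5,
                    fiber_km_per_ms: float = 200) -> float:
    """Crude round-trip time estimate in milliseconds."""
    propagation_ms = 2 * distance_km / fiber_km_per_ms  # there and back in fiber
    processing_ms = 2 * hops * per_hop_ms               # each hop, both directions
    return propagation_ms + processing_ms

# NYC to a West Coast host: ~4400 km and, say, 15 hops each way.
print(estimate_rtt_ms(4400, 15))  # 59.0
```

It's only a sketch, but it shows why hop count matters: on a short path the per-hop cost can dominate, while coast-to-coast the propagation term is the larger share.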
I think geography has a lot to do with packet transit time, since the further you go, the more hops you will most likely add, affecting overall latency. If your customers are going to be based mostly on the West Coast, then I'd go for West Coast hosting; same thing for the East Coast. If your customers will be coming from all over the US, or the world, then you'll just have to make the hard decision as to which side gets the lower latency.
In our case, we're on our own network (one big intranet), and we're able to let our routers make decisions based on OSPF throughout the state :) Unfortunately, anything off our network relies primarily on our ISP's layout.
A great tool you can use is MTR, which reports not just latency but also packet loss, route info, jitter, etc. I made a post about the information it gives here: serverfault.com/questions/21048/…
– l0c0b0x
Sep 2 '09 at 21:47
Your Answer
StackExchange.ready(function()
var channelOptions =
tags: "".split(" "),
id: "2"
;
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function()
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled)
StackExchange.using("snippets", function()
createEditor();
);
else
createEditor();
);
function createEditor()
StackExchange.prepareEditor(
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader:
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
,
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
);
);
Sign up or log in
StackExchange.ready(function ()
StackExchange.helpers.onClickDraftSave('#login-link');
);
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function ()
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fserverfault.com%2fquestions%2f61719%2fhow-does-geography-affect-network-latency%23new-answer', 'question_page');
);
Post as a guest
Required, but never shown
6 Answers
6
active
oldest
votes
6 Answers
6
active
oldest
votes
active
oldest
votes
active
oldest
votes
There is a distance delay and all other things being equal (routing efficiency, processing overhead, congestion, etc.) a site on the west coast accessed by a host on the east coast is going to take longer than if that site is on the east coast but we're talking milliseconds here.
Link down.......
– Pacerier
Feb 11 '17 at 10:33
add a comment |
There is a distance delay and all other things being equal (routing efficiency, processing overhead, congestion, etc.) a site on the west coast accessed by a host on the east coast is going to take longer than if that site is on the east coast but we're talking milliseconds here.
Link down.......
– Pacerier
Feb 11 '17 at 10:33
add a comment |
There is a distance delay and all other things being equal (routing efficiency, processing overhead, congestion, etc.) a site on the west coast accessed by a host on the east coast is going to take longer than if that site is on the east coast but we're talking milliseconds here.
There is a distance delay and all other things being equal (routing efficiency, processing overhead, congestion, etc.) a site on the west coast accessed by a host on the east coast is going to take longer than if that site is on the east coast but we're talking milliseconds here.
edited Feb 11 '17 at 16:01
answered Sep 2 '09 at 21:30
joeqwertyjoeqwerty
96.6k465149
96.6k465149
Link down.......
– Pacerier
Feb 11 '17 at 10:33
add a comment |
Link down.......
– Pacerier
Feb 11 '17 at 10:33
Link down.......
– Pacerier
Feb 11 '17 at 10:33
Link down.......
– Pacerier
Feb 11 '17 at 10:33
add a comment |
All other things equal, you will have additional 44 milliseconds of latency just because of the speed of light. Give or take 1/20 of a second for each packet roundtrip. Not much for typical web usage. Passable for ssh sessions. Substantial if you access your DB directly with a lot of small consecutive transactions.
I've ignored extra latency caused by additional routers/repeaters, which could be much, much higher. I've assumed the distance 4400 km and speed of light in fiber 200000 km/s.
1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provide is good.
– Ryaner
Sep 2 '09 at 22:08
@Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?
– Pacerier
Feb 11 '17 at 10:19
lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers.
– Ryaner
Feb 13 '17 at 11:24
add a comment |
All other things equal, you will have additional 44 milliseconds of latency just because of the speed of light. Give or take 1/20 of a second for each packet roundtrip. Not much for typical web usage. Passable for ssh sessions. Substantial if you access your DB directly with a lot of small consecutive transactions.
I've ignored extra latency caused by additional routers/repeaters, which could be much, much higher. I've assumed the distance 4400 km and speed of light in fiber 200000 km/s.
1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provide is good.
– Ryaner
Sep 2 '09 at 22:08
@Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?
– Pacerier
Feb 11 '17 at 10:19
lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers.
– Ryaner
Feb 13 '17 at 11:24
add a comment |
All other things equal, you will have additional 44 milliseconds of latency just because of the speed of light. Give or take 1/20 of a second for each packet roundtrip. Not much for typical web usage. Passable for ssh sessions. Substantial if you access your DB directly with a lot of small consecutive transactions.
I've ignored extra latency caused by additional routers/repeaters, which could be much, much higher. I've assumed the distance 4400 km and speed of light in fiber 200000 km/s.
All other things equal, you will have additional 44 milliseconds of latency just because of the speed of light. Give or take 1/20 of a second for each packet roundtrip. Not much for typical web usage. Passable for ssh sessions. Substantial if you access your DB directly with a lot of small consecutive transactions.
I've ignored extra latency caused by additional routers/repeaters, which could be much, much higher. I've assumed the distance 4400 km and speed of light in fiber 200000 km/s.
answered Sep 2 '09 at 21:52
kubanczykkubanczyk
10.5k22845
10.5k22845
1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provide is good.
– Ryaner
Sep 2 '09 at 22:08
@Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?
– Pacerier
Feb 11 '17 at 10:19
lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers.
– Ryaner
Feb 13 '17 at 11:24
add a comment |
1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provide is good.
– Ryaner
Sep 2 '09 at 22:08
@Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?
– Pacerier
Feb 11 '17 at 10:19
lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers.
– Ryaner
Feb 13 '17 at 11:24
1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provide is good.
– Ryaner
Sep 2 '09 at 22:08
1ms per 100km is only accurate if there are no routers in between. Fibre repeaters don't add latency once your provide is good.
– Ryaner
Sep 2 '09 at 22:08
@Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?
– Pacerier
Feb 11 '17 at 10:19
@Ryaner, What do you mean by ~"Fibre repeaters have zero latency"? How is this possible?
– Pacerier
Feb 11 '17 at 10:19
lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers.
– Ryaner
Feb 13 '17 at 11:24
lightwaveonline.com/articles/print/volume-29/issue-6/feature/… covers the different details very well. TLDR, optical regeneration repeaters do add latency but it is basically zero in real terms. You will see a higher latency added by the DCM on either end and your end point routers.
– Ryaner
Feb 13 '17 at 11:24
add a comment |
We had a client that we spent a fair bit of time going around and around with related to this. They originally were hosted in New York, and their staff is mostly located in the Boston area. They were moving their servers to our facility located in Denver, about two-thirds of the way across the country.
Once they moved, they started bringing up performance problems from their Comcast links in home offices. They used to have <10ms latency, and it went up to 80-ish ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the west coast would be potentially getting better performance.
We went back and forth a few times. After around 6 months, we switched to a different primary upstream ISP, for reasons unrelated to this client (better pricing, more bandwidth, unhappy with the number of maintenance windows on the other provider), and with the new provider we were getting around 45ms average latency for this client. At this point their performance concerns seem to have gone away.
Just to give you some experience about one case where this sort of issue was seen and the numbers related to it.
Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore anything but the last hop listed on that output. Van Jacobson says that humans notice latency starting at 400ms, but realize that many connections require multiple back-and-forth exchanges, so a 100ms latency can quickly add up to a second...
From my experience, 250ms latency starts to feel like a noticeably slow connection. 10ms or better feels like a blazing connection. It really depends on what you're doing.
Sean
Denver, meaning CO?
– Pacerier
Feb 11 '17 at 10:22
Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.
– Pacerier
Feb 11 '17 at 10:28
add a comment |
We had a client that we spent a fair bit of time going around and around with related to this. They originally were hosted in New York, and their staff is mostly located in the Boston area. They were moving their servers to our facility located in Denver, about two-thirds of the way across the country.
Once they moved, they started bringing up performance problems from their Comcast links in home offices. They used to have <10ms latency, and it went up to 80-ish ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the west coast would be potentially getting better performance.
We went back and forth a few times. After around 6 months, we switched to a different primary upstream ISP, for reasons unrelated to this client (better pricing, more bandwidth, unhappy with the number of maintenance windows on the other provider), and with the new provider we were getting around 45ms average latency for this client. At this point their performance concerns seem to have gone away.
Just to give you some experience about one case where this sort of issue was seen and the numbers related to it.
Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore anything but the last hop listed on that output. Van Jacobson says that humans notice latency starting at 400ms, but realize that many connections require multiple back-and-forth exchanges, so a 100ms latency can quickly add up to a second...
From my experience, 250ms latency starts to feel like a noticeably slow connection. 10ms or better feels like a blazing connection. It really depends on what you're doing.
Sean
Denver, meaning CO?
– Pacerier
Feb 11 '17 at 10:22
Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.
– Pacerier
Feb 11 '17 at 10:28
add a comment |
We had a client that we spent a fair bit of time going around and around with related to this. They originally were hosted in New York, and their staff is mostly located in the Boston area. They were moving their servers to our facility located in Denver, about two-thirds of the way across the country.
Once they moved, they started bringing up performance problems from their Comcast links in home offices. They used to have <10ms latency, and it went up to 80-ish ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the west coast would be potentially getting better performance.
We went back and forth a few times. After around 6 months, we switched to a different primary upstream ISP, for reasons unrelated to this client (better pricing, more bandwidth, unhappy with the number of maintenance windows on the other provider), and with the new provider we were getting around 45ms average latency for this client. At this point their performance concerns seem to have gone away.
Just to give you some experience about one case where this sort of issue was seen and the numbers related to it.
Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore anything but the last hop listed on that output. Van Jacobson says that humans notice latency starting at 400ms, but realize that many connections require multiple back-and-forth exchanges, so a 100ms latency can quickly add up to a second...
From my experience, 250ms latency starts to feel like a noticeably slow connection. 10ms or better feels like a blazing connection. It really depends on what you're doing.
Sean
We had a client that we spent a fair bit of time going around and around with related to this. They originally were hosted in New York, and their staff is mostly located in the Boston area. They were moving their servers to our facility located in Denver, about two-thirds of the way across the country.
Once they moved, they started bringing up performance problems from their Comcast links in home offices. They used to have <10ms latency, and it went up to 80-ish ms. They noticed slower performance reaching their sites, but said "maybe we are just going to have to put up with going from blazingly fast to mere mortal speeds." They seemed to realize that there were limitations because of the geography, and that their users on the west coast would be potentially getting better performance.
We went back and forth a few times. After around 6 months, we switched to a different primary upstream ISP, for reasons unrelated to this client (better pricing, more bandwidth, unhappy with the number of maintenance windows on the other provider), and with the new provider we were getting around 45ms average latency for this client. At this point their performance concerns seem to have gone away.
Just to give you some experience about one case where this sort of issue was seen and the numbers related to it.
Try using "mtr" to show information about the latency and packet loss to the different remote ends. Unless you fully understand "slow path" routing, ignore anything but the last hop listed on that output. Van Jacobson says that humans notice latency starting at 400ms, but realize that many connections require multiple back-and-forth exchanges, so a 100ms latency can quickly add up to a second...
From my experience, 250ms latency starts to feel like a noticeably slow connection. 10ms or better feels like a blazing connection. It really depends on what you're doing.
Sean
answered Sep 3 '09 at 3:11
Sean ReifschneiderSean Reifschneider
8,55531927
8,55531927
Denver, meaning CO?
– Pacerier
Feb 11 '17 at 10:22
Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.
– Pacerier
Feb 11 '17 at 10:28
add a comment |
Denver, meaning CO?
– Pacerier
Feb 11 '17 at 10:22
Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.
– Pacerier
Feb 11 '17 at 10:28
Denver, meaning CO?
– Pacerier
Feb 11 '17 at 10:22
Denver, meaning CO?
– Pacerier
Feb 11 '17 at 10:22
Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.
– Pacerier
Feb 11 '17 at 10:28
Btw, "humans notice latency starting at 400ms" is blatantly false. Talk to any gamer and you'd see that a latency of 65ms (and above) is completely unacceptable.
– Pacerier
Feb 11 '17 at 10:28
add a comment |
Well, packets travel down the wire at the close enough to the speed of light that raw transmission time is negligible when compared to other factors. What matters is efficiency of routing and how fast routing devices can do the routing. That unfortunately can't be determined purely based on geographical distance. There is a strong correlation between distance and latency, but there is not hard and fast rule that I am aware of.
Is there any way to determine the efficiency of the routing? Is there some sort of test I could run on the two servers to see that, and what would that number mean?
– neezer
Sep 2 '09 at 21:29
2
Packets travel at best .66 (optical) and ~.55 (copper) times the speed of light.
– Noah Campbell
Sep 2 '09 at 21:34
@Noah - Is that better?
– EBGreen
Sep 2 '09 at 21:38
Better than the speed of light? No, it's about half the speed of light.
– Noah Campbell
Sep 2 '09 at 21:45
I mean, is my edit a better description?
– EBGreen
Sep 2 '09 at 21:47
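To put a number on the propagation-delay floor these comments are arguing about, here is a back-of-the-envelope sketch. The 0.66c figure for fiber comes from the comment above; the roughly 4,100 km New York to San Francisco distance is an assumed great-circle value, and real cable paths are longer:

```python
# Back-of-the-envelope lower bound on round-trip time from propagation
# delay alone, ignoring queuing, serialization, and routing overhead.

C_VACUUM_KM_PER_MS = 299_792.458 / 1000  # speed of light, km per millisecond
FIBER_FACTOR = 0.66                      # signal speed in optical fiber, ~0.66c

def min_rtt_ms(distance_km, velocity_factor=FIBER_FACTOR):
    """Theoretical minimum round-trip time over a straight fiber path."""
    one_way_ms = distance_km / (C_VACUUM_KM_PER_MS * velocity_factor)
    return 2 * one_way_ms

# Assumed great-circle distance New York -> San Francisco: ~4,100 km.
print(f"Coast-to-coast RTT floor: ~{min_rtt_ms(4100):.0f} ms")  # ~41 ms
```

So even with perfect routing you cannot do much better than about 40ms coast to coast; anything above that is routing, queuing, and equipment overhead.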
edited Sep 2 '09 at 21:38
answered Sep 2 '09 at 21:19
EBGreen
The number of hops between point A and point B will introduce latency, so counting the hops is your best first indicator.
A few words of caution: the methods used to evaluate the network path are not always consistent with how the actual packets will flow. ICMP may be routed differently or given a different QoS. Also, traceroute typically looks in one direction only, i.e. source to destination. Here are some handy tricks.
For traceroute, try using -I, -U or -T to see how the path varies with the probe protocol (ICMP, UDP or TCP). Also look at -t 16 or -t 8 to vary the type-of-service bits.
Ping is actually pretty helpful too: ping -R will show you the path the reply takes to return! If it differs from the path going out, then see where it is going.
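As a toy illustration of why hop count is a useful first indicator, here is a minimal model; the per-hop delays are invented numbers for illustration, not measurements:

```python
# Toy model: one-way latency as a base propagation delay plus a
# processing/queuing cost per hop. All numbers are invented.

def path_latency_ms(propagation_ms, per_hop_ms):
    """One-way latency given a base propagation delay and per-hop costs."""
    return propagation_ms + sum(per_hop_ms)

short_path = [0.5] * 8           # 8 well-behaved hops
long_path = [0.5] * 14 + [25.0]  # 14 hops plus one congested router

print(path_latency_ms(20, short_path))  # 24.0
print(path_latency_ms(20, long_path))   # 52.0
```

The model also shows the limit of hop counting: one congested hop can cost more than all the others combined, which is why per-hop tools like mtr or traceroute matter more than the raw count.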
answered Sep 2 '09 at 21:42
Noah Campbell
I think geography will have a lot to do with packet transit time, since the further you go, the more hops you will most likely add, affecting overall latency. If your customers are going to be based mostly on the west coast, then I'd go for the west-coast hosting... same thing on the east coast. If your customers will be coming from all over the US, or the world... then you'll just have to make the hard decision as to which side gets the lower latency.
In our case, we're on our own network (one big intranet), and are able to let our routers make decisions based on OSPF throughout the state :) Unfortunately, anything off our network relies primarily on our ISP's layout.
A great tool you can use is MTR to not just find out the latency, but packet loss, route info, jitter, etc. I made a post about the information it gives here: serverfault.com/questions/21048/…
– l0c0b0x
Sep 2 '09 at 21:47
answered Sep 2 '09 at 21:33
l0c0b0x
Oh, how I envy your position. We've got a client in the Philippines who are running on ISDN as their only link for all their network traffic. You wanna talk latency? Try 700ms between them and us (in Australia) when there's NO traffic on the line :(
– Mark Henderson♦
Sep 2 '09 at 21:37