Very low disk throughput on AWS EBS volumes
I am manually copying data from one EBS volume to another, smaller one, because the volumes use XFS, which cannot be shrunk.
I am using an EBS-optimised t3.micro instance with the Amazon Linux 2 AMI, which has both gp2 EBS volumes attached in addition to the instance's root volume (everything is in the same Availability Zone).
I have done this before and it took around 5-10 minutes to copy 95 GB of data (about 162 MB/s of throughput if it took 10 minutes), but now, with the same volumes, it is very slow.
The copying process is:
tar cSf - /mnt/nvme1n1p1/ | cat | (cd ../nvme2n1p1/ && tar xSBf -)
I have it running in the background while monitoring it with iostat -xm 5 3.
These are the results I am getting:
avg-cpu: %user %nice %system %iowait %steal %idle
0.07 0.02 0.86 39.62 0.05 59.39
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme1n1 0.00 0.00 54.20 0.00 6.70 0.00 253.19 0.94 34.62 34.62 3.56 17.32 93.90
nvme2n1 0.00 0.28 0.06 27.20 0.00 6.71 503.98 0.14 6.67 0.31 6.68 1.22 3.32
nvme0n1 0.00 0.02 2.10 0.90 0.04 0.00 30.65 0.00 0.63 0.63 0.62 0.08 0.02
avg-cpu: %user %nice %system %iowait %steal %idle
0.10 0.00 0.70 37.54 0.00 61.66
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme1n1 0.00 0.00 46.40 0.00 5.80 0.00 256.00 1.00 43.16 43.16 0.00 21.48 99.68
nvme2n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme0n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.90 38.66 0.10 60.34
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
nvme1n1 0.00 0.00 53.80 0.00 6.73 0.00 256.00 1.00 36.67 36.67 0.00 18.57 99.92
nvme2n1 0.00 0.00 0.00 16.00 0.00 4.00 512.00 0.03 3.20 0.00 3.20 0.80 1.28
nvme0n1 0.00 0.60 0.00 1.40 0.00 0.02 23.14 0.00 0.00 0.00 0.00 0.00 0.00
As you can see, I am getting throughput below 10 MB/s, and it keeps dropping. I have been reading about EBS throughput and I cannot find any clue as to what this could be, whether there is some penalty or something similar...
Do you know what could be causing it?
Thanks in advance! :)
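For reference, the same transfer can be expressed without the intermediate cat, which only adds an extra copy through a pipe and is unlikely to be the bottleneck either way (a sketch assuming the mount points above; GNU tar's -C changes directory before archiving or extracting):
tar -C /mnt/nvme1n1p1 -cSf - . | tar -C /mnt/nvme2n1p1 -xSf -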
More requested info:
ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 3700
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 3700
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
df -h:
Filesystem Size Used Avail Use% Mounted on
devtmpfs 463M 0 463M 0% /dev
tmpfs 480M 0 480M 0% /dev/shm
tmpfs 480M 380K 480M 1% /run
tmpfs 480M 0 480M 0% /sys/fs/cgroup
/dev/nvme0n1p1 8.0G 1.1G 7.0G 13% /
tmpfs 96M 0 96M 0% /run/user/1000
/dev/nvme1n1p1 500G 93G 408G 19% /mnt/nvme1n1p1
/dev/nvme2n1p1 150G 55G 96G 37% /mnt/nvme2n1p1
EBS Burst Balance stays above 98% the whole time.
EDIT: it did not happen again the next time I ran the copy.
Tags: amazon-web-services, amazon-ebs
asked Dec 26 '18 at 13:21 by froblesmartin, edited Dec 27 '18 at 11:00
Additional information request. Post on pastebin.com or here. Text results of: B) SHOW GLOBAL STATUS; after minimum 4 hours UPTIME C) SHOW GLOBAL VARIABLES; D) complete MySQLTuner report if already available - otherwise skip this request AND Optional very helpful information, if available includes - htop OR top OR mytop for most active apps, ulimit -a for a linux/unix list of limits, iostat -xm 5 3 when system is busy for an idea of IOPS by device and core count, df -h for a linux/unix free space list by device, for server tuning analysis.
– Wilson Hauck, Dec 26 '18 at 14:21
This looks like the typical behavior encountered when reading from a volume that was recently created from an EBS snapshot. Does this describe your situation?
– Michael - sqlbot, Dec 30 '18 at 3:18
@Michael-sqlbot now that you say so, you are probably right. I had to restore the volume from a snapshot because of a mistake, and this attempt was probably after that! Quite sure! Thanks! :)
– froblesmartin, Dec 31 '18 at 8:22
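That matches how EBS snapshots work: blocks of a volume created from a snapshot are pulled from S3 lazily the first time they are read, so the first full pass over the volume is slow. Per AWS's guidance on initializing volumes restored from snapshots, one mitigation is to read every block once before the real copy; a sketch, assuming /dev/nvme1n1 is the restored volume:
sudo dd if=/dev/nvme1n1 of=/dev/null bs=1M
or, with fio, which drives a deeper I/O queue:
sudo fio --filename=/dev/nvme1n1 --rw=read --bs=1M --iodepth=32 --ioengine=libaio --direct=1 --name=volume-initialize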
2 Answers
Open Amazon CloudWatch and review the "CPUCreditBalance" metric for the instance. Look at the total credits available with a sample rate of every 5 minutes. Are the credits dropping to near zero at any point?
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances-monitoring-cpu-credits.html
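A minimal CLI sketch for pulling the same metric outside the console (the instance ID and time range are placeholders):
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistics Average --period 300 \
  --start-time 2018-12-26T00:00:00Z --end-time 2018-12-27T00:00:00Z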
A ‘T’-type AWS instance is a burstable, performance-limited type. A t2.micro instance earns only 6 CPU credits per hour, which means its CPU can only run at a sustained 10% usage or it will chew up all of its credits and slow to a crawl.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances-monitoring-cpu-credits.html
Increase the size of your instance type. I would recommend changing to a sufficiently sized ‘C’ type instance until after the copy is done. You can downgrade back to a smaller instance afterwards.
answered Dec 26 '18 at 15:02 by Appleoddity
Thanks for replying! CPUCreditBalance has been rising linearly since the beginning and the CPU is almost not used at all. It shouldn't be something about the instance size: RAM usage is less than 60 MB, and as I said, with the same instance type just 2 hours before, it was doing it perfectly :(
– froblesmartin, Dec 26 '18 at 15:22
EBS volumes also have their own limits and credits based on size: docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
– Appleoddity, Dec 26 '18 at 15:31
Update 2019
Another possible and more likely answer is the instance throughput limit, which has been measured and documented here. A t2.micro has a baseline of 0.06 Gbps, which is about 7.5 MB/s, though it can burst to roughly 10x that.
EBS Credits
One possibility is that you've run out of EBS credits, which are separate and distinct from the t2/t3 CPU credit balance. Read this AWS article about it.
Add "EBS Burst Balance" for all your volumes to your CloudWatch dashboard. If any are or have been at or near zero, that's your answer. If not, keep looking.
Here's part of the documentation I linked to.
Many AWS customers are getting great results with the General Purpose SSD (gp2) EBS volumes that we launched in mid-2014 (see New SSD-Backed Elastic Block Storage for more information). If you’re unsure of which volume type to use for your workload, gp2 volumes are the best default choice because they offer balanced price/performance for a wide variety of database, dev and test, and boot volume workloads. One of the more interesting aspects of this volume type is the burst feature.
We designed gp2‘s burst feature to suit the I/O patterns of real world workloads we observed across our customer base. Our data scientists found that volume I/O is extremely bursty, spiking for short periods, with plenty of idle time between bursts. This unpredictable and bursty nature of traffic is why we designed the gp2 burst-bucket to allow even the smallest of volumes to burst up to 3000 IOPS and to replenish their burst bucket during idle times or when performing low levels of I/O. The burst-bucket design allows us to provide consistent and predictable performance for all gp2 users. In practice, very few gp2 volumes ever completely deplete their burst-bucket, and now customers can track their usage patterns and adjust accordingly.
We’ve written extensively about performance optimization across different volume types and the differences between benchmarking and real-world workloads (see I/O Characteristics for more information). As I described in my original post, burst credits accumulate at a rate of 3 per configured GB per second, and each one pays for one read or one write. Each volume can accumulate up to 5.4 million credits, and they can be spent at up to 3,000 per second per volume. To get started, you simply create gp2 volumes of the desired size, launch your application, and your I/O to the volume will proceed as rapidly and efficiently as possible.
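Applying those rates to the volumes in the question (an illustrative calculation, not part of the quoted documentation):
baseline IOPS, 500 GB source volume:      3 IOPS/GB x 500 GB = 1,500 IOPS
baseline IOPS, 150 GB destination volume: 3 IOPS/GB x 150 GB =   450 IOPS
observed read rate (iostat r/s above):    roughly 46-54 IOPS
Both volumes are running far below even their baseline IOPS, which is consistent with the reported Burst Balance staying above 98% and points toward the instance-level throughput limit rather than the volumes' burst buckets.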
answered Dec 27 '18 at 7:37 by Tim, edited Apr 30 at 7:40