ConfigMgr - Really really slow PXE boot between Hyper-V machines
I have ConfigMgr 2012 R2 CU3 installed on a Hyper-V virtual machine. The virtual machines are hosted on a Server 2012 R2 Hyper-V cluster, and the ConfigMgr server is running on Server 2012 R2 as well.
I'm trying to PXE boot another virtual machine from Configuration Manager. It works, but the boot process took hours just to get WinPE loaded. The ConfigMgr and client machines were on different nodes of the cluster - it turns out this is relevant.
Troubleshooting steps so far
I read a variety of articles like this one that say to set the RamDiskTFTPBlockSize registry key to a higher value. I tried several values, but it didn't seem to make a difference, so I set it back to the default.
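For reference, the change those articles describe can be applied from an elevated PowerShell prompt. This is a sketch only, assuming the registry path commonly documented for WDS on Server 2012 R2, and 16384 is just an example value, not a recommendation:

```powershell
# Raise the WDS TFTP block size (path as commonly documented for WDS;
# 16384 is an example value - several values were tried in practice).
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\WDSServer\Providers\WDSTFTP'
Set-ItemProperty -Path $key -Name RamDiskTFTPBlockSize -Value 16384 -Type DWord

# The change takes effect after the WDS service restarts
Restart-Service WDSServer
```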
To rule out network issues, I moved the client machine onto the same node as ConfigMgr - and it booted somewhat faster. After reapplying the RamDiskTFTPBlockSize change, it got nice and fast, booting in about 2 minutes.
So now I'm reasonably sure it's a network issue, but I'm not sure what that issue is.
I have done packet captures from the ConfigMgr machine of a boot from a VM on the same node, and a VM on a different node, and don't see any differences. The TFTP packets get acknowledged the same way, there aren't any noticeable errors, and no retried blocks. In both cases, packets get fragmented if the block size is set high.
Update
I tried network booting a physical client machine, and it's slow as well. In Resource Monitor on the ConfigMgr server, the network traffic sending to the client is about 130 Kb/s. When running this test, the RamDiskTFTPBlockSize was set to 8192, and packet captures confirm it's using that block size.
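As a rough sanity check of that figure (reading it as kilobytes per second - an assumption; read as kilobits it would be about 8x worse), a WinPE boot image of around 150 MB (also an assumed size) would take on the order of 20 minutes to transfer:

```powershell
# ~150 MB boot image (assumed size) at the ~130 KB/s observed rate
$imageKB = 150 * 1024
$rateKBs = 130
[math]::Round(($imageKB / $rateKBs) / 60)   # roughly 20 minutes
```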
Network Configuration
The networking for the virtual machines is set up like this:
- Virtual machines are connected to the virtual switch, and have VLANs configured.
- The ConfigMgr server is using the newer type of network adapter. The client is using the Legacy Network Adapter to support PXE.
- Each node in the cluster has the built-in Windows NIC teaming set up for the virtual machines - two adapters in switch-independent mode with dynamic load balancing. The Hyper-V virtual switch uses this team.
- Each node has its adapters plugged in to the same HP V1910-48G switch. All the connections are gigabit.
- On the switch, the ports for virtual machines are set up as VLAN trunks with the appropriate VLANs. There is no LACP or other teaming set up on the switch side.
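For context, the team and virtual switch described above would be created roughly like this. This is a sketch only - the adapter, team, and switch names are made up for illustration:

```powershell
# Built-in Windows NIC teaming: switch-independent mode, dynamic load balancing
# ('NIC1'/'NIC2' and the team/switch names are hypothetical)
New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# The Hyper-V virtual switch binds to the team interface
New-VMSwitch -Name 'VMSwitch' -NetAdapterName 'VMTeam' -AllowManagementOS $false
```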
Any idea what is causing this, and how can I fix it?
pxe-boot wds tftp sccm-2012-r2
Are you using dynamic memory for either of your VMs? I've seen some odd behavior when using dynamic memory on Hyper-V OSD targets and the ConfigMgr server.
– alx9r
Dec 2 '14 at 4:11
I see that you're using VLANs. What device is doing inter-VLAN routing? We currently have a similar open issue with a SonicWall device. Perhaps your router-firewall-IPS is not getting along with TFTP.
– Don Zoomik
Jan 11 '15 at 19:42
Dynamic memory off makes no difference. Client and server are on same vlan/subnet so traffic only goes through switches. The problem comes and goes making it hard to diagnose.
– Grant
Jan 11 '15 at 20:49
asked Oct 16 '14 at 14:47 by Grant, edited Oct 16 '14 at 15:58
1 Answer
I had the same problem, and tried the same things you did. Then I found that my internet connection was also very slow.
In Network Connections, choose the physical network card, open its Properties, select Hyper-V Extensible Virtual Switch, click Configure, go to the Advanced tab, select Virtual Machine Queues, and set the value to Disabled.
That's it - now you will have full speed on the PXE boot. I went from 20 minutes to under 1.
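The same fix can be scripted instead of clicking through the adapter properties - a sketch using the built-in NetAdapter cmdlets, with example adapter names:

```powershell
# Disable Virtual Machine Queues on the physical NICs behind the virtual switch
# ('NIC1'/'NIC2' are placeholder names - list yours with Get-NetAdapterVmq)
Disable-NetAdapterVmq -Name 'NIC1','NIC2'

# Verify the setting took effect
Get-NetAdapterVmq | Select-Object Name, Enabled
```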
answered Jul 7 '17 at 7:28 by Jan Rasmussen, edited Jul 10 '17 at 7:24 by Matthew Wetmore