Why is my new two-node cluster not behaving right?
I’m trying to create a two-node cluster following this guide, but it behaves a little strangely. For example, I was unable to run:
[root@afnA ~]# pcs property set stonith-enabled=false
Error: Unable to update cib
Call cib_replace failed (-62): Timer expired
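(For reference: a quick way to confirm whether the CIB is reachable at all, assuming the standard Pacemaker CLI tools are installed, is to query it directly; if these also hang or time out, the problem sits below pcs, in corosync/pacemaker messaging.)
[root@afnA ~]# cibadmin --query | head     # dump the start of the live CIB XML
[root@afnA ~]# pcs cluster cib | head      # same query, via pcs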
The only thing I find in the logs is a continuous stream of corosync events:
Nov 06 01:30:54 corosync [TOTEM ] Retransmit List: 96 97
Nov 06 01:30:56 corosync [TOTEM ] Retransmit List: 96 97
Nov 06 01:30:57 corosync [TOTEM ] Retransmit List: 96 97
Nov 06 01:30:59 corosync [TOTEM ] Retransmit List: 96 97
Nov 06 01:31:01 corosync [TOTEM ] Retransmit List: 96 97
...
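(Aside: continuous TOTEM "Retransmit List" entries usually mean corosync packets are being lost between the nodes, which with this configuration would be multicast traffic. A quick sanity check, assuming omping is installed, is to run it on both nodes at the same time with the cluster node names:)
[root@afnA ~]# omping afnA.mxi.tdcfoo afnB.mxi.tdcfoo    # run the same command on afnB as well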
Let me know if more info (e.g. a full pcs cluster report) would help!
CentOS 6.7 w/:
pacemaker-1.1.12-8.el6.x86_64
pcs-0.9.139-9.el6_7.1.x86_64
ccs-0.16.2-81.el6.x86_64
resource-agents-3.9.5-24.el6.x86_64
cman-3.0.12.1-73.el6.1.x86_64
corosync-1.4.7-2.el6.x86_64
[root@afnB ~]# pacemakerd --features
Pacemaker 1.1.11 (Build: 97629de) Supporting v3.0.9: generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc nagios corosync-plugin cman acls
[root@afnB ~]# corosync-quorumtool -l
Nodeid Name
1 afnA.mxi.tdcfoo
2 afnB.mxi.tdcfoo
[root@afnB ~]# corosync-quorumtool -s
Version: 1.4.7
Nodes: 2
Ring ID: 8
Quorum type: quorum_cman
Quorate: Yes
[root@afnB ~]# pcs status
Cluster name: afn-cluster
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Fri Nov 6 01:35:30 2015
Last change: Fri Nov 6 01:29:37 2015
Stack: cman
Current DC: afna - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured
0 Resources configured
Online: [ afna afnb ]
Full list of resources:
[root@afnB ~]# cat /etc/cluster/cluster.conf
<cluster config_version="1" name="afn-cluster">
  <fence_daemon/>
  <clusternodes>
    <clusternode name="afna" nodeid="1">
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="afna"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="afnb" nodeid="2">
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="afnb"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman/>
  <fencedevices>
    <fencedevice agent="fence_pcmk" name="pcmk"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <resources/>
  </rm>
</cluster>
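(A quick way to check this file against the cluster schema, assuming the standard EL6 cluster tooling from the ccs/cman packages, is:)
[root@afnB ~]# ccs_config_validate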
[root@afnB ~]# grep -v '#' /etc/corosync/corosync.conf
compatibility: whitetank

totem {
    version: 2
    secauth: off
    threads: 0
    window_size: 150
    interface {
        ringnumber: 0
        bindnetaddr: 10.45.69.0
        mcastaddr: 239.255.15.1
        mcastport: 5405
        ttl: 1
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
    debug: off
    timestamp: on
    logger_subsys {
        subsys: AMF
        debug: off
    }
}
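(Note: with the cman stack, corosync is started by cman and takes its effective totem/interface settings from cluster.conf, so this file is largely ignored; transport or interface changes have to go into cluster.conf. The runtime values can be inspected with the corosync 1.x object database tool, e.g.:)
[root@afnB ~]# corosync-objctl | grep -i -e transport -e mcast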
Tags: linux, failovercluster, pacemaker, corosync
Firewalling. It's always firewalling. (disclaimer: may not always be firewalling)
– womble♦
Nov 6 '15 at 1:11
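(For reference, if a firewall were in play here, the ports that typically matter on EL6 are corosync's UDP 5404-5405 and pcsd's TCP 2224; a minimal iptables sketch to allow them between the nodes might look like:)
iptables -A INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT    # corosync/totem
iptables -A INPUT -p tcp --dport 2224 -j ACCEPT                       # pcsd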
Nope, no FW between the nodes :)
– Steffen Winther Sørensen
Nov 6 '15 at 10:30
Seems unicast fixed the issue ;) This was done by changing the cman configuration to the udpu transport in cluster.conf, like this: <cman transport="udpu" port="5405" two_node="1" expected_votes="1"/>. Assuming multicast in Open vSwitch is the culprit, as multicasting stopped after 180 s (e.g. with omping); also see here
– Steffen Winther Sørensen
Nov 6 '15 at 10:32
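For reference, a rough sketch of the relevant change in /etc/cluster/cluster.conf (config_version bumped, as cman expects whenever the file changes), followed by a restart of the cluster stack on both nodes so that cman picks up the new transport:
<cluster config_version="2" name="afn-cluster">
  ...
  <cman transport="udpu" port="5405" two_node="1" expected_votes="1"/>
  ...
</cluster>
[root@afnA ~]# pcs cluster stop --all && pcs cluster start --all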