{"id":378,"date":"2013-07-10T00:30:16","date_gmt":"2013-07-09T22:30:16","guid":{"rendered":"http:\/\/www.ludovicocaldara.net\/dba\/?p=378"},"modified":"2020-08-18T16:55:58","modified_gmt":"2020-08-18T14:55:58","slug":"oracle-rac-and-policy-managed-databases","status":"publish","type":"post","link":"https:\/\/www.ludovicocaldara.net\/dba\/oracle-rac-and-policy-managed-databases\/","title":{"rendered":"Oracle RAC and Policy Managed Databases"},"content":{"rendered":"<p>&nbsp;<\/p>\n<p>Some weeks ago I commented on a good post by Martin Bach (<a href=\"https:\/\/twitter.com\/MartinDBA\">@MartinDBA<\/a>\u00a0on Twitter, make sure to follow him!)<\/p>\n<p><a href=\"http:\/\/martincarstenbach.wordpress.com\/2013\/06\/17\/an-introduction-to-policy-managed-databases-in-11-2-rac\/\">http:\/\/martincarstenbach.wordpress.com\/2013\/06\/17\/an-introduction-to-policy-managed-databases-in-11-2-rac\/<\/a><\/p>\n<p>What I&#8217;ve realized is that Policy Managed Databases are not widely used, there is a lot of misunderstanding about how they work, and there are some concerns about implementing them in production.<\/p>\n<p>My current employer <a href=\"http:\/\/www.trivadis.com\/\">Trivadis<\/a>\u00a0(<a href=\"https:\/\/twitter.com\/Trivadis\">@Trivadis<\/a>, make sure to call us if your database needs a health check :-)) uses PMDs as a best practice, so it&#8217;s worth spending some words on them, isn&#8217;t it?<\/p>\n<p><strong>Why Policy Managed Databases?<\/strong><\/p>\n<p>PMDs are an efficient way to manage and consolidate several databases and services with the least effort. They rely on Server Pools, which are used to physically partition a big cluster into smaller groups of servers. 
Each pool has three main properties:<\/p>\n<ul>\n<li><span style=\"line-height: 15px;\">A <strong>minimum<\/strong>\u00a0number of servers required to compose the group<\/span><\/li>\n<li>A <strong>maximum<\/strong> number of servers<\/li>\n<li>A <strong>priority<\/strong> that makes a server pool more important than others<\/li>\n<\/ul>\n<p>If the cluster loses a server, the following rules apply:<\/p>\n<ul>\n<li><span style=\"line-height: 15px;\">If a pool has less than\u00a0<strong>min<\/strong><strong> servers<\/strong>, a server is moved from a pool that has more than min servers, starting with the one with the lowest priority.<\/span><\/li>\n<li>If a pool has less than <strong>min servers<\/strong> and no other pool has more than min servers, the server is moved from the pool with the lowest priority.<\/li>\n<li>Pools with higher priority may give servers to pools with lower priority as long as their min server property is still honored.<\/li>\n<\/ul>\n<p>This means that if a server pool has the highest priority, all other server pools can be reduced to satisfy its number of min servers.<\/p>\n<p>Generally speaking, when you create a policy managed database (it can be an existing one, of course!) it is assigned to a server pool rather than a single server. 
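The reallocation rules above can be sketched in a few lines of plain Python. This is only an illustration of the decision logic, not any Oracle API; the function name and the pool layout (mirroring the lab later in this post) are hypothetical:

```python
def pick_donor(pools, needy):
    """Pick which pool donates a server to `needy` when it falls below its minimum.

    pools: dict of pool name -> {"size": current servers, "min": minimum, "importance": priority}
    """
    # Rule 1: pools holding more servers than their configured minimum donate first.
    candidates = [name for name, p in pools.items()
                  if name != needy and p["size"] > p["min"]]
    if not candidates:
        # Rule 2: otherwise the server comes from any other pool that still has servers.
        candidates = [name for name, p in pools.items()
                      if name != needy and p["size"] > 0]
    if not candidates:
        return None
    # In both cases, the pool with the lowest importance gives first.
    return min(candidates, key=lambda name: pools[name]["importance"])

# Hypothetical layout: LUDO has dropped below its minimum of 2 servers.
pools = {
    "PMU":  {"size": 4, "min": 3, "importance": 3},
    "TST":  {"size": 2, "min": 2, "importance": 2},
    "LUDO": {"size": 1, "min": 2, "importance": 1},
}
print(pick_donor(pools, "LUDO"))  # PMU: the only pool above its minimum
```

Note that even though PMU has the highest importance, it donates the server here, because it is the only pool still above its minimum (rule 1 applies before importance is compared across all pools).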
The pool is seen as an abstract resource that you can put workload on.<\/p>\n<p><a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_descr.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-394\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_descr-300x151.png\" alt=\"SRVPOOL_descr\" width=\"300\" height=\"151\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_descr-300x151.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_descr-500x252.png 500w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_descr.png 892w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>If you read the definition of Cloud Computing given by the NIST (<a href=\"http:\/\/csrc.nist.gov\/publications\/nistpubs\/800-145\/SP800-145.pdf\">http:\/\/csrc.nist.gov\/publications\/nistpubs\/800-145\/SP800-145.pdf<\/a>) you&#8217;ll find something similar:<\/p>\n<blockquote><p>Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared<br \/>\npool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that<br \/>\ncan be rapidly provisioned and released with minimal management effort or service provider interaction<\/p>\n<p>&nbsp;<\/p><\/blockquote>\n<p>There are some major benefits in using policy managed databases (that&#8217;s solely my opinion):<\/p>\n<ol>\n<li><span style=\"line-height: 15px;\">PMD instances are created\/removed automatically. This means that you can add and remove nodes to\/from the server pools or the whole cluster, and the underlying databases will be expanded or shrunk to follow the new topology.<\/span><\/li>\n<li>Server Pools (that are the base for PMDs) allow you to give different priorities to different groups of servers. 
This means that, if correctly configured, you can lose several physical nodes without impacting your most critical applications and without reconfiguring the instances.<\/li>\n<li>PMDs are the base for <a href=\"http:\/\/docs.oracle.com\/cd\/E11882_01\/server.112\/e24611\/apqos_intro.htm#CHDEJFBD\">Quality of Service management<\/a>, an 11gR2 feature that does resource management cluster-wide to achieve predictable performance for critical applications\/transactions. QoS is a really advanced topic so I warn you: do not use it without appropriate knowledge. Again,\u00a0<a href=\"http:\/\/www.trivadis.com\/\">Trivadis<\/a>\u00a0has deep knowledge of it, so you may want to contact us for a consulting service (and why not, perhaps I&#8217;ll try to blog about it in the future).<\/li>\n<li>RAC One Node databases (RONDs?) can work alongside PMDs to avoid instance proliferation for non-critical applications.<\/li>\n<li>Oracle is pushing it to achieve maximum flexibility for the Cloud, so it&#8217;s a trendy technology that&#8217;s cool to implement!<\/li>\n<li>I&#8217;ll find some other reasons, for sure! 
\ud83d\ude42<\/li>\n<\/ol>\n<p><strong>What changes in real-life DB administration?<\/strong><\/p>\n<p>Well, the concept of a fixed Server -&gt; Instance relationship disappears, so at the very beginning you&#8217;ll have to be prepared for something dynamic (but once configured, things don&#8217;t change often).<\/p>\n<p>As Martin <a href=\"http:\/\/martincarstenbach.wordpress.com\/2010\/02\/12\/server-pool-experiments-in-rac-11-2\/\">pointed out in his blog<\/a>, you&#8217;ll need to configure server pools and think about pools of resources rather than individual configuration items.<\/p>\n<p>The spfile doesn&#8217;t contain any information related to specific instances, so the parameters must be database-wide.<\/p>\n<p>The\u00a0<strong>oratab<\/strong> will contain only the dbname, not the instance name, and the dbname is present in the oratab regardless of which server pool the server belongs to.<\/p>\n<pre class=\"toolbar-overlay:false lang:default decode:true\">+ASM1:\/oracle\/grid\/11.2.0.3:N           # line added by Agent\r\nPMU:\/oracle\/db\/11.2.0.3:N               # line added by Agent\r\nTST:\/oracle\/db\/11.2.0.3:N               # line added by Agent<\/pre>\n<p>Your scripts should take care of this.<\/p>\n<p>Also, when connecting to your database, you should rely on services and access your database remotely rather than trying to figure out where the instances are running. But if you really need to know, you can get it:<\/p>\n<pre class=\"lang:default decode:true\"># srvctl status database -d PMU\r\nInstance PMU_4 is running on node node2\r\nInstance PMU_2 is running on node node3\r\nInstance PMU_3 is running on node node4\r\nInstance PMU_5 is running on node node6\r\nInstance PMU_1 is running on node node7\r\nInstance PMU_6 is running on node node8<\/pre>\n<p>An approach for the crontab: sooner or later every DBA will need to schedule tasks with crond. 
Since a RAC has multiple nodes, you don&#8217;t want to run the same script on every node, but rather choose which node will execute it.<\/p>\n<p>My personal approach (every DBA has his personal preference) is to check which node hosts the instance with cardinality 1 and match it against the current node, e.g.:<\/p>\n<pre class=\"lang:default decode:true\"># [ `crsctl stat res ora.tst.db -k 1 | grep STATE=ONLINE | awk '{print $NF}'` == `uname -n` ]\r\n# echo $?\r\n0\r\n\r\n# [ `crsctl stat res ora.tst.db -k 1 | grep STATE=ONLINE | awk '{print $NF}'` == `uname -n` ]\r\n# echo $?\r\n1<\/pre>\n<p>In the example, TST_1 is running on node1, so the first evaluation returns TRUE. The second evaluation is done on node2, so it returns FALSE.<\/p>\n<p>This trick can be used to have an identical crontab on every server and choose at runtime whether the local server is the preferred one to run tasks for the specified database.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>A proof of concept with Policy Managed Databases<\/strong><\/p>\n<p>My good colleague <a href=\"http:\/\/ch.linkedin.com\/pub\/jacques-kostic\/39\/190\/706\">Jacques Kostic<\/a> has given me access to an enterprise-grade private lab so I can show you some &#8220;live operations&#8221;.<\/p>\n<p>Let&#8217;s start with the actual topology: it&#8217;s an 8-node stretched RAC with ASM diskgroups with failgroups on the remote site.<\/p>\n<p><a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_ARCH.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone  wp-image-381\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_ARCH-300x225.png\" alt=\"RAC_ARCH\" width=\"300\" height=\"225\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_ARCH-300x225.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_ARCH-400x300.png 400w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_ARCH.png 960w\" 
sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>This should be enough to show you some capabilities of server pools.<\/p>\n<p><strong>The Generic and Free server pools<\/strong><\/p>\n<p>After a clean installation, you&#8217;ll end up with two default server pools:<\/p>\n<p><a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_empty.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-386\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_empty-300x75.png\" alt=\"SRVPOOL_empty\" width=\"300\" height=\"75\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_empty-300x75.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_empty-500x125.png 500w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_empty.png 840w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>The Generic one will contain all non-PMDs (if you use only PMDs it will be empty). 
The Free one will own the &#8220;spare&#8221; servers: when all server pools have reached their maximum size, the servers left over are not required and stay in the Free pool.<\/p>\n<p><strong>New server pools<\/strong><\/p>\n<p>Actually, the cluster I&#8217;m working on has two server pools already defined (PMU and TST):<\/p>\n<p><a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_new.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-387\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_new-300x154.png\" alt=\"SRVPOOL_new\" width=\"300\" height=\"154\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_new-300x154.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_new-500x257.png 500w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_new.png 895w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>(the node assignment in the graphic is not relevant here).<\/p>\n<p>They have been created with a command like this one:<\/p>\n<pre># srvctl add serverpool -g PMU -l 5 -u 6 -i 3<\/pre>\n<pre># srvctl add serverpool -g TST -l 2 -u 3 -i 2<\/pre>\n<p>&#8220;srvctl -h&#8221; is a good starting point for a quick reference of the syntax.<\/p>\n<p>You can check the status with:<\/p>\n<pre class=\"lang:default decode:true\"># srvctl status serverpool\r\nServer pool name: Free\r\nActive servers count: 0\r\nServer pool name: Generic\r\nActive servers count: 0\r\nServer pool name: PMU\r\nActive servers count: 6\r\nServer pool name: TST\r\nActive servers count: 2<\/pre>\n<p>and the configuration:<\/p>\n<pre class=\"lang:default decode:true\"># srvctl config serverpool\r\nServer pool name: Free\r\nImportance: 0, Min: 0, Max: -1\r\nCandidate server names:\r\nServer pool name: Generic\r\nImportance: 0, Min: 0, Max: -1\r\nCandidate server names:\r\nServer pool 
name: PMU\r\nImportance: 3, Min: 5, Max: 6\r\nCandidate server names:\r\nServer pool name: TST\r\nImportance: 2, Min: 2, Max: 3\r\nCandidate server names:<\/pre>\n<p>&nbsp;<\/p>\n<p><strong>Modifying the configuration of serverpools<\/strong><\/p>\n<p>In this scenario, PMU is too big. The sum of the minimum nodes is 2+5=7, so I have only one server that can be used for another server pool without falling below the minimum number of nodes.<\/p>\n<p>I want to make some room for another server pool composed of two or three nodes, so I reduce the serverpool PMU:<\/p>\n<pre class=\"toolbar-overlay:false lang:default decode:true\"># srvctl modify serverpool -g PMU -l 3<\/pre>\n<p>Notice that the PMU maxsize is still 6, so I don&#8217;t have free servers yet.<\/p>\n<pre># srvctl status database -d PMU\r\nInstance PMU_4 is running on node node2\r\nInstance PMU_2 is running on node node3\r\nInstance PMU_3 is running on node node4\r\nInstance PMU_5 is running on node node6\r\nInstance PMU_1 is running on node node7\r\nInstance PMU_6 is running on node node8<\/pre>\n<p>So, if I try to create another serverpool I&#8217;m warned that some resources can be taken offline:<\/p>\n<pre class=\"lang:default decode:true\"># srvctl add serverpool -g LUDO -l 2 -u 3 -i 1\r\nPRCS-1009 : Failed to create server pool LUDO\r\nPRCR-1071 : Failed to register or update server pool ora.LUDO\r\nCRS-2736: The operation requires stopping resource 'ora.pmu.db' on server 'node8'\r\nCRS-2736: The operation requires stopping resource 'ora.pmu.db' on server 'node3'\r\nCRS-2737: Unable to register server pool 'ora.LUDO' as this will affect running resources, but the force option was not specified<\/pre>\n<p>The clusterware proposes to stop 2 instances of the database pmu because the server pool PMU can shrink from 6 to 3 servers, but I have to confirm the operation with the -f flag.<\/p>\n<p>Modifying the serverpool layout can take time if resources have to be started\/stopped.<\/p>\n<pre 
class=\"toolbar-overlay:false lang:default decode:true\"># srvctl status serverpool\r\nServer pool name: Free\r\nActive servers count: 0\r\nServer pool name: Generic\r\nActive servers count: 0\r\nServer pool name: LUDO\r\nActive servers count: 2\r\nServer pool name: PMU\r\nActive servers count: 4\r\nServer pool name: TST\r\nActive servers count: 2<\/pre>\n<p>My new serverpool is finally composed of only two nodes, because I&#8217;ve set an importance of 1 (PMU wins as it has an importance of 3).<\/p>\n<p><strong>Inviting RAC One Node databases to the party<\/strong><\/p>\n<p>Now that I have some room on my new serverpool, I can start creating new databases.<\/p>\n<p>With PMDs I can add two types of databases: <strong>RAC<\/strong> or <strong>RACONENODE<\/strong>. Depending on the choice, I&#8217;ll have a database running on<strong> ALL NODES OF THE SERVER POOL<\/strong> or on<strong> ONE NODE ONLY<\/strong>. This is a kind of limitation in my opinion; I hope Oracle will improve it in the near future: it would be great to specify the cardinality at the database level too.<\/p>\n<p>Creating a RAC One DB is as simple as selecting two radio buttons in the dbca &#8220;standard&#8221; procedure:<\/p>\n<p><a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-395\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one-300x215.png\" alt=\"RAC_one\" width=\"300\" height=\"215\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one-300x215.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one-418x300.png 418w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one.png 761w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>The Server Pool can be created or you can specify an existing one (as in this lab):<\/p>\n<p><a 
href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one_pool.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-396\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one_pool-300x215.png\" alt=\"RAC_one_pool\" width=\"300\" height=\"215\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one_pool-300x215.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one_pool-418x300.png 418w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/RAC_one_pool.png 760w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>I&#8217;ve created two new RAC One Node databases:<\/p>\n<ul>\n<li><span style=\"line-height: 15px;\">DB LUDO (service PRISM :-))<\/span><\/li>\n<li>DB VICO (service CHEERS)<\/li>\n<\/ul>\n<p>I&#8217;ve ended up with something like this:<\/p>\n<pre class=\"toolbar-overlay:false lang:default decode:true\">--------------------------------------------------------------------------------\r\nNAME           TARGET  STATE        SERVER                   STATE_DETAILS\r\n--------------------------------------------------------------------------------\r\nora.ludo.db   &lt;&lt;&lt;&lt;&lt; RAC ONE\r\n      1        ONLINE  ONLINE       node8                    Open\r\nora.ludo.prism.svc\r\n      1        ONLINE  ONLINE       node8\r\nora.pmu.db\r\n      1        ONLINE  ONLINE       node7                    Open\r\n      2        ONLINE  ONLINE       node4                    Open\r\n      3        ONLINE  ONLINE       node5                    Open\r\n      4        ONLINE  ONLINE       node6                    Open\r\nora.tst.db\r\n      1        ONLINE  ONLINE       node1                    Open\r\n      2        ONLINE  ONLINE       node2                    Open\r\nora.vico.cheers.svc\r\n      1        ONLINE  ONLINE       node3\r\nora.vico.db  
&lt;&lt;&lt;&lt; RAC ONE\r\n      1        ONLINE  ONLINE       node3                    Open<\/pre>\n<p>That can be represented with this picture:<\/p>\n<p><a href=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_final.png\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-397\" src=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_final-300x153.png\" alt=\"SRVPOOL_final\" width=\"300\" height=\"153\" srcset=\"https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_final-300x153.png 300w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_final-500x255.png 500w, https:\/\/www.ludovicocaldara.net\/dba\/wp-content\/uploads\/2013\/07\/SRVPOOL_final.png 777w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>RAC One Node databases can be managed as always with online relocation (is it still called O-Motion?)<\/p>\n<p><strong>Losing the nodes<\/strong><\/p>\n<p>With this situation, what happens if I lose (stop) one node?<\/p>\n<pre class=\"toolbar-overlay:false lang:default decode:true\"># crsctl stop cluster -n node8\r\nCRS-2673: Attempting to stop 'ora.crsd' on 'node8'\r\nCRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node8'\r\nCRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node8'\r\nCRS-2673: Attempting to stop 'ora.ludo.prism.svc' on 'node8'\r\nCRS-2677: Stop of 'ora.ludo.prism.svc' on 'node8' succeeded\r\nCRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node8' succeeded\r\nCRS-2673: Attempting to stop 'ora.node8.vip' on 'node8'\r\nCRS-2677: Stop of 'ora.node8.vip' on 'node8' succeeded\r\nCRS-2672: Attempting to start 'ora.node8.vip' on 'node4'\r\nCRS-2676: Start of 'ora.node8.vip' on 'node4' succeeded\r\nCRS-2673: Attempting to stop 'ora.ludo.db' on 'node8'\r\nCRS-2677: Stop of 'ora.ludo.db' on 'node8' succeeded\r\nCRS-2672: Attempting to start 
'ora.ludo.db' on 'node3'\r\nCRS-2676: Start of 'ora.ludo.db' on 'node3' succeeded\r\nCRS-2672: Attempting to start 'ora.ludo.prism.svc' on 'node3'\r\nCRS-2676: Start of 'ora.ludo.prism.svc' on 'node3' succeeded\r\nCRS-2673: Attempting to stop 'ora.GRID.dg' on 'node8'\r\nCRS-2673: Attempting to stop 'ora.DATA.dg' on 'node8'\r\nCRS-2673: Attempting to stop 'ora.FRA.dg' on 'node8'\r\nCRS-2673: Attempting to stop 'ora.RECO.dg' on 'node8'\r\nCRS-2677: Stop of 'ora.DATA.dg' on 'node8' succeeded\r\nCRS-2677: Stop of 'ora.FRA.dg' on 'node8' succeeded\r\nCRS-2677: Stop of 'ora.RECO.dg' on 'node8' succeeded\r\nCRS-2677: Stop of 'ora.GRID.dg' on 'node8' succeeded\r\nCRS-2673: Attempting to stop 'ora.asm' on 'node8'\r\nCRS-2677: Stop of 'ora.asm' on 'node8' succeeded\r\nCRS-2673: Attempting to stop 'ora.ons' on 'node8'\r\nCRS-2677: Stop of 'ora.ons' on 'node8' succeeded\r\nCRS-2673: Attempting to stop 'ora.net1.network' on 'node8'\r\nCRS-2677: Stop of 'ora.net1.network' on 'node8' succeeded\r\nCRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node8' has completed\r\nCRS-2677: Stop of 'ora.crsd' on 'node8' succeeded\r\nCRS-2673: Attempting to stop 'ora.ctssd' on 'node8'\r\nCRS-2673: Attempting to stop 'ora.evmd' on 'node8'\r\nCRS-2673: Attempting to stop 'ora.asm' on 'node8'\r\nCRS-2677: Stop of 'ora.evmd' on 'node8' succeeded\r\nCRS-2677: Stop of 'ora.asm' on 'node8' succeeded\r\nCRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node8'\r\nCRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node8' succeeded\r\nCRS-2677: Stop of 'ora.ctssd' on 'node8' succeeded\r\nCRS-2673: Attempting to stop 'ora.cssd' on 'node8'\r\nCRS-2677: Stop of 'ora.cssd' on 'node8' succeeded<\/pre>\n<p>The node belonged to the pool LUDO; however, this is the situation right after:<\/p>\n<pre class=\"toolbar-overlay:false lang:default decode:true\"># srvctl status serverpool\r\nServer pool name: Free\r\nActive servers count: 0\r\nServer pool name: Generic\r\nActive 
servers count: 0\r\nServer pool name: LUDO\r\nActive servers count: 2\r\nServer pool name: PMU\r\nActive servers count: 3\r\nServer pool name: TST\r\nActive servers count: 2<\/pre>\n<p>A server has been taken from the pool PMU and given to the pool LUDO. This is because PMU had one more server than its minimum server requirement.<\/p>\n<p>&nbsp;<\/p>\n<p>Now, if I keep losing one node at a time, I&#8217;ll have the following situation:<\/p>\n<ul>\n<li><span style=\"line-height: 15px;\">1 node lost: <strong>PMU 3<\/strong>, TST 2, LUDO 2<\/span><\/li>\n<li>2 nodes lost: PMU 3, TST 2<strong>, LUDO 1<\/strong> (as PMU is already on min and has higher priority, LUDO is penalized because it has the lowest priority)<\/li>\n<li>3 nodes lost: PMU 3, TST 2,<strong> LUDO 0<\/strong> (as LUDO has the lowest priority)<\/li>\n<li>4 nodes lost: PMU 3, <strong>TST 1<\/strong>, LUDO 0<\/li>\n<li>5 nodes lost: PMU 3, <strong>TST 0<\/strong>, LUDO 0<\/li>\n<\/ul>\n<p>So, my hyper-super-critical application will still have three nodes with plenty of resources to run, even after multiple physical failures, as it is the server pool with the highest priority and a minimum required server number of 3.<\/p>\n<p><strong>What I would ask Santa if I\u2019m on the Nice List (and if Santa works at Redwood Shores)<\/strong><\/p>\n<p>Dear Santa, I would like:<\/p>\n<ul>\n<li>To create databases with node cardinality, to have, for example, 2 instances in a 3-node server pool<\/li>\n<li>Server Pools that are aware of the physical location when I use stretched clusters, so I would always end up with &#8220;at least one active instance per site&#8221;.<\/li>\n<\/ul>\n<p>Think about it \ud83d\ude09<\/p>\n<p>&#8212;<\/p>\n<p>Ludovico<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; Some weeks ago I&#8217;ve commented a good post of Martin Bach (@MartinDBA\u00a0on Twitter, make sure to follow him!) 
http:\/\/martincarstenbach.wordpress.com\/2013\/06\/17\/an-introduction-to-policy-managed-databases-in-11-2-rac\/ What I&#8217;ve realized by \u00a0is that Policy Managed Databases are not widely used and there is a lot misunderstanding on &hellip; <a href=\"https:\/\/www.ludovicocaldara.net\/dba\/oracle-rac-and-policy-managed-databases\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[327,326,3,149,132],"tags":[19,99,97,22,95,98,23,100],"class_list":["post-378","post","type-post","status-publish","format-standard","hentry","category-oracle-maa","category-oracle","category-oracledb","category-oracle-rac","category-triblog","tag-cluster","tag-clusterware","tag-grid-infrastructure","tag-oracle-database","tag-oracle-rac","tag-policy-managed","tag-rac","tag-rac-one-node"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/378","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/comments?post=378"}],"version-history":[{"count":20,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/378\/revisions"}],"predecessor-version":[{"id":962,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/posts\/378\/revisions\/962"}],"wp:attachment":[{"href":"ht
tps:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/media?parent=378"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/categories?post=378"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.ludovicocaldara.net\/dba\/wp-json\/wp\/v2\/tags?post=378"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}