Migrating Oracle RAC from SuSE to OEL (or RHEL) live

I have a customer that needs to migrate its Oracle RAC cluster from SuSE to OEL.

I know, I know, there is a paper from Dell and Oracle named:

How Dell Migrated from SUSE Linux to Oracle Linux

That explains how Dell migrated its many RAC clusters from SuSE to OEL. The problem is that they used a different strategy:

– back up the configuration of the nodes
– then, for each node, one at a time:
– stop the node
– reinstall the OS
– restore the configuration and the Oracle binaries
– relink
– restart

What I want to achieve instead is:
– add one OEL node to the SuSE cluster as a new node
– remove one SuSE node from the now-mixed cluster
– install/restore/relink the RDBMS software (RAC) on the new node
– move the RAC instances to the new node (taking care to NOT run more than the number of licensed nodes/CPUs at any time)
– repeat (for the remaining nodes)

because the customer will also migrate to new hardware.

In order to test this migration path, I’ve set up a SINGLE NODE cluster (if it works for one node, it will for two or more).

I have to set up the new node addition carefully, mostly as I would do with a traditional node addition:

  • Add new ip addresses (public, private, vip) to the DNS/hosts
  • Install the new OEL server
  • Keep the same users and groups (uid, gid, etc.)
  • Verify the network connectivity and setup SSH equivalence
  • Check that the multicast connection is ok
  • Add the storage, configure persistent naming (udev) and verify that the disks (major, minor, names) are the very same
  • The network interface names must also be the very same

Once the new host is ready, cluvfy stage -pre nodeadd will likely fail due to:

  • Kernel release mismatch
  • Package mismatch

Here’s an example of the check:
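The check would be launched like this (a sketch; the node name is hypothetical):

  # as the grid software owner, from an existing node
  cluvfy stage -pre nodeadd -n newoelnode -verbose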

So the question is not whether the check succeeds (it won’t), but what fails.

Solving all the problems not related to the SuSE-OEL difference is crucial, because addNode.sh will fail with the same errors. I need to run it using the -ignorePrereq and -ignoreSysPrereqs switches. Let’s see how it works:
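A hedged sketch of the invocation (11.2-style syntax; paths and node names are hypothetical and vary by version):

  # as the grid software owner, from an existing node
  cd $GRID_HOME/oui/bin
  ./addNode.sh -silent -ignorePrereq -ignoreSysPrereqs \
    "CLUSTER_NEW_NODES={newoelnode}" \
    "CLUSTER_NEW_VIRTUAL_HOSTNAMES={newoelnode-vip}"
  # on 11.2 you may also need to export IGNORE_PREADDNODE_CHECKS=Y beforehand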

Then, as instructed by addNode.sh, I run root.sh and I expect it to work:
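That is, something like (assuming $GRID_HOME points to the grid home):

  # as root, on the new OEL node
  $GRID_HOME/root.sh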

Bingo! Let’s check if everything is up and running:
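A quick sketch of the verification:

  # from any node
  olsnodes -n -s
  crsctl stat res -t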

So yes, it works, but remember that it’s not a supported long-term configuration.

In my case I expect to migrate the whole cluster from SLES to OEL in one day.

NOTE: using OEL6 as the new target is easy because the interface names do not change. OEL7 introduces a new interface naming scheme; if you need to migrate without cluster downtime, you need to set up the new OEL7 nodes so that they keep the legacy interface names, following this post: http://ask.xmodulo.com/change-network-interface-name-centos7.html

Otherwise, you need to configure a new interface name for the cluster with oifcfg.
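For example, if the private network is 172.16.100.0 and the new nodes come up with an interface called ens192 (hypothetical names), you would register the additional interface name like this:

  oifcfg setif -global ens192/172.16.100.0:cluster_interconnect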

HTH

Ludovico

Oracle RAC and the Private Cloud. And why small customers are not implementing it. Not yet.

Cloud. What a wonderful word. Wonderful and gray.
If you are involved in the Oracle Community, blogs and conferences, you certainly care about it and have perhaps your own conception of it or ideas about how to implement it.

My Collaborate 2015 RAC SIG experience

During the last Collaborate conference, I’ve “tried” to animate the traditional RAC SIG Round-Table with this topic:

In the last few years, cloud computing and infrastructure optimization have been the leading topics that guided the IT innovation. What’s the role of Oracle RAC in this context?

During this meeting, leading RAC specialists, product managers, RAC SIG representatives and RAC Attack Ninjas will come together and discuss with you the new Oracle RAC 12c features for the private cloud and the manageability of RAC environments.

Join us for the great discussion. This is your chance to have a great networking session!

Because it’s the RAC SIG meeting, most of the participants DO HAVE a RAC environment to manage, and are looking for best practices and ideas to improve it, or maybe they want to share their experiences.

I’ve started the session by asking how many people are currently operating a private cloud and how many would like to implement it.

To my biggest surprise (so big that I immediately felt uncomfortable), except for one single person, nobody raised a hand.

What?

I spent a very bad minute; I was almost speechless, actually asking myself: “is my conception of private cloud wrong?”. Then my good friend Yury came to help, and we started the discussion about the RAC features that enable private cloud capabilities. During those 30 minutes, almost no users intervened. Then the Oracle Product Managers (RAC, ASM, QoS, Cloud) started explaining their points of view, and I suddenly realized that

when talking about Private Cloud, there is a huge gap between the Oracle Private Cloud implementation best practices and the small customers’ skills and budgets.

When Oracle product managers talk about Private Cloud, they target big companies and advise planning the infrastructure using:

  • Exadata
  • A full pack of options, for a total of about 131k per CPU:
    • Enterprise Edition (47.5k)
    • Multitenant (17.5k)
    • Real Application Clusters (23k)
    • Partitioning (11.5k)
    • Diagnostic Pack (7.5k)
    • Tuning Pack (5k)
    • Lifecycle Management Pack (12k)
    • Cloud Management Pack (7.5k)
  • Flex Cluster
  • Policy Managed Databases
  • Quality of Services Management
  • Rapid Home provisioning
  • Enterprise Manager and DBaaS Self Service portal

The CapEx needed for such a stack is definitely a show stopper for most small and medium companies. And it’s not only about the cost. When I gave my presentation about Policy Managed Databases at Collaborate in 2014, and later about Multitenant and MAA at Open World, it was clear that “almost” nobody (let’s say less than 5%, just to give an idea) uses these new technologies. Many of them are new and, in some cases, not stable. Notably, Multitenant and QoS do not work together as of now: QoS will work with the new PDB-level resource manager only in release 12.2 (and even that is not guaranteed).

For the average company (or the average DBA), there is more than enough to be scared about, so private cloud is not seen as easy to implement.

So there’s no private cloud solution for SMBs?

It really depends on what you want to achieve, and at which level.

Based on my experience at Trivadis, I can say that you can achieve Private Cloud for less. Much less.

What should a Private Cloud guarantee? According to the NIST definition, five things:

  1. On-demand self-service.
  2. Broad network access.
  3. Resource pooling.
  4. Rapid elasticity.
  5. Measured service.

Number 5 is clearly EM’s field, and the new AWR Warehouse feature may be of great help, for free (but you can still do a lot on your own with Statspack and some scripting, if you are crazy enough to do it without the Diagnostic Pack).

Numbers 3 and 4 are a peculiarity of RAC, and they are included in the EE+RAC license. By leveraging OVM, there are very good opportunities for savings if the initial sizing of the solution is a problem: with OVM you can start as small as you want.

Number 1 depends on standards and automation already in place at your company. Generally speaking, nowadays scripting automatic provisioning with DBCA and APEX is very simple. If you’re not comfortable with coding, tools like the Trivadis Toolbox make this task easier. Moreover, nobody said that the self-service provisioning must be done through a web interface by the final user. It might be (and usually is) triggered by an event, like the creation of a service request, so you can keep web development outside of your cloud.
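As a minimal sketch of what such a provisioning script could call (names, passwords and options are placeholders; dbca silent mode offers many more switches):

  # hands-off database creation, e.g. triggered by a service request
  dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName SELFSRV1 \
    -sysPassword change_me -systemPassword change_me \
    -storageType ASM -diskGroupName DATA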

Putting it all together

You can create a basic Private Cloud that fits your needs perfectly, without spending or changing too much in your RAC environment.

Automation doesn’t mean cost: you can do it on your own and keep it simple. If you need advice, ideas or some help, just drop me an email (myfirstname.mylastname@trivadis.com); it would be great to discuss your needs for a private cloud!

Things can be less complex than what we often think. Our imagination is the new limit!

Ludovico

#C15LV RAC Attack wrap

Did I say in some of my previous posts that I love RAC Attack? I love it even more during the Collaborate conference, because doing it as a pre-conference workshop is just the right way to get people involved and go straight to the goal: learning while having fun together.

We had far fewer people than expected, but it has still been a great success!

The t-shirts have been great for coloring the room: as soon as people finished the installation of the first Linux VM, they got a t-shirt.

Look at the room at the beginning of the workshop:


After a few hours, the room looked better! New ninjas, red stack, happy participants 🙂


We had a very special guest today. Mr. RAC PM in person has tried and validated our installation instructions 😉

We got pizza again, but because of restrictions at the convention center, it has been a beer-free afternoon 🙁

Thank you anyway to the OTN for sponsoring almost everything!!


 

Looking forward to organizing the next RAC Attack. Thank you guys!! 🙂


Ludo

 

It’s time to Collaborate again!!

In a little more than a couple of weeks, the great Collaborate conference will start again.

My agenda will be quite packed again, as speaker, panelist and workshop organizer:

  • 08/04/2015, 3:15 pm - 4:15 pm: Oracle RAC, Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant (IOUG Collaborate 15, Las Vegas NV)
  • 08/04/2015, 4:30 pm - 5:30 pm: Panel: Nothing to BLOG About - Think Again (IOUG Collaborate 15, Las Vegas NV)
  • 12/04/2015, 9:00 am - 4:00 pm: RAC Attack! 12c (IOUG Collaborate 15, Las Vegas NV)
  • 15/04/2015, 5:30 pm - 6:00 pm: IOUG RAC SIG Meeting (IOUG Collaborate 15, Las Vegas NV)

 

RAC Attack! 12c

This technical workshop and networking event (never forget that it’s a project created several years ago thanks to an intuition of Jeremy Schneider) confirms itself as one of the best, longest-living projects in the Oracle Community. It certainly boosted my Community involvement up to becoming an Oracle ACE. This year I’m coordinating the organization of the workshop: a double satisfaction, and it will certainly be a lot of fun again. Did I say that it’s already fully booked? I’ve already blogged about it (and about what the lucky participants will get) here.

 

Oracle RAC, Data Guard, and Pluggable Databases: When MAA Meets Oracle Multitenant 

One of my favorite presentations: I’ve already presented it at OOW14 and UKOUG Tech14, but it’s still a very new topic for most people, even the most experienced DBAs. You’ll learn how Multitenant, RAC and Data Guard work together. Expect colorful architecture schemas and a live demo! You can read more about it in this post.

 

Panel: Nothing to BLOG About – Think Again

My friend Michael Abbey (Pythian) invited me to participate in his panel about blogging. It’s my first time as panelist, so I’m very excited!

 

IOUG RAC SIG Meeting

Missing this great networking event is not an option! I’m organizing this session as RAC SIG board member (Thanks to the IOUG for this opportunity!). We’ll focus on Real Application Clusters role in the private cloud and infrastructure optimization. We’ll have many special guests, including Oracle RAC PM Markus Michalewicz, Oracle QoS PM Mark Scardina and Oracle ASM PM James Williams.

Can you ever miss it???

 

A good Trivadis representative!!


This year I’m not going to Las Vegas alone. My Trivadis colleague Markus Flechtner, one of the most expert RAC technologists I have the chance to know, will also come and present a session about RAC diagnostics:

615: RAC Clinics- Starring Dr. ORACHK, Dr CHM and Dr. TFA

Mon. April 13| 9:15 AM – 10:15 AM | Room Palm D

If you speak German you can follow his nice blog: http://oracle.markusflechtner.de/

Looking forward to meeting you there!

Ludovico

Moving Clusterware Interconnect from single NIC/Bond to HAIP

Very recently I had to reconfigure a customer’s RAC private interconnect from bonding to HAIP, to get the benefit of both NICs.

So I would like to recap here what the hypothetical steps would be if you need to do the same.

In this example I’ll switch from a single-NIC interconnect (eth1) rather than from a bond configuration, so if you are familiar with the RAC Attack! environment you can try to put everything in place on your own.

First, you need to plan the new network configuration in advance, keeping in mind that there are a couple of important restrictions:

  1. Your interconnect interface naming must be uniform on all nodes in the cluster. The interconnect uses the interface name in its configuration and it doesn’t support different names on different hosts
  2. You must bind the different private interconnect interfaces in different subnets (see Note: 1481481.1 – 11gR2 CSS Terminates/Node Eviction After Unplugging one Network Cable in Redundant Interconnect Environment if you need an explanation)

 

Implementation 

The RAC Attack book uses one interface per node for the interconnect (eth1, using network 172.16.100.0)

To make things a little more complex, we’ll not use the eth1 in the new HAIP configuration, so we’ll test also the deletion of the old interface.

What you need to do is add two new interfaces (host-only, if you are in VirtualBox) and configure them as eth2 and eth3 (e.g. in networks 172.16.101.0 and 172.16.102.0).

 

modify /var/named/racattack in order to use the new addresses (RAC doesn’t care about logical names, it’s just for our convenience), and add the reverse lookup in in-addr.arpa as well:
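A hedged sketch of both zone changes (addresses follow the RAC Attack conventions; the host names are illustrative):

  ; /var/named/racattack : forward records for the new private interfaces
  collabn1-priv2  IN A 172.16.101.51
  collabn2-priv2  IN A 172.16.101.52
  collabn1-priv3  IN A 172.16.102.51
  collabn2-priv3  IN A 172.16.102.52

  ; matching PTR records in the in-addr.arpa reverse zones
  51  IN PTR collabn1-priv2.racattack.
  52  IN PTR collabn2-priv2.racattack.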

 

restart named on the first node and check that both nodes can ping all the names correctly:
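A sketch (OEL6-style service management; host names as defined above):

  # on the first node, as root
  service named restart
  # then, from both nodes
  for h in collabn1-priv2 collabn2-priv2 collabn1-priv3 collabn2-priv3; do
    ping -c 1 $h
  done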

check the nodes that compose the cluster:
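For example:

  olsnodes -n -s -t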

on all nodes, make a copy of the gpnp profile.xml (just in case; the oifcfg tool makes the copy automatically):
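Something like (assuming $GRID_HOME points to the grid home):

  cp $GRID_HOME/gpnp/$(hostname -s)/profiles/peer/profile.xml \
     /tmp/profile.xml.$(date +%Y%m%d)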

List the available networks and get the current IP configuration of the interconnect:
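With oifcfg (a sketch):

  oifcfg iflist -p -n   # all networks visible to the node
  oifcfg getif          # interfaces registered in the cluster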

on one node only, set the new interconnect interfaces:
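Using the subnets chosen earlier:

  # the change is global: it is propagated to all nodes via the GPnP profile
  oifcfg setif -global eth2/172.16.101.0:cluster_interconnect eth3/172.16.102.0:cluster_interconnect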

check that the other nodes have received the new configuration:
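Still with oifcfg:

  # on each remaining node, the new entries should be listed
  oifcfg getif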

Before deleting the old interface, it would be sensible to stop your cluster resources (in some cases, one of the nodes may be evicted). In any case, the cluster must be restarted completely in order to get the new interfaces working.

Note: having three interfaces in a HAIP interconnect works perfectly well; HAIP supports from 2 to 4 interfaces. I’m showing how to delete eth1 just for information!! 🙂

on all nodes, shut down the CRS:
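That is:

  # as root, on every node
  crsctl stop crs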

Now you can disable the old interface:
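A sketch of both steps (cluster configuration first, then the OS interface):

  # as the grid owner, from one node
  oifcfg delif -global eth1/172.16.100.0
  # as root, on every node
  ifdown eth1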

and set ONBOOT=no inside the configuration file of the eth1 interface (/etc/sysconfig/network-scripts/ifcfg-eth1).

Start the cluster again and check that the resources are up & running:
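A sketch:

  # as root, on every node
  crsctl start crs
  # once CRS is up, from any node
  crsctl stat res -t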

 

 Testing the high availability

Disconnect the cable from one of the two interfaces (virtually, if you’re in VirtualBox 🙂 ).

Pay attention to the NO-CARRIER status (on eth2 in this example):
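A way to check it (sketch):

  ip link show eth2   # a disconnected NIC shows NO-CARRIER among its flags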

check that the CRS is still up & running:

 

The virtual interface eth2:1 has failed over to the second interface as eth3:2.
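You can observe this by looking at the HAIP link-local addresses (169.254.x.x) assigned to the interfaces, e.g.:

  # on OEL6, ifconfig shows the HAIP addresses as interface aliases (ethX:Y)
  ifconfig | grep -B 1 '169\.254'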

 

After the cable is reconnected, the virtual interface is back on eth2:

 

Further information

For this post I’ve used RAC version 11.2, but RAC 12c uses the very same procedure.

You can discover more here about HAIP:

http://docs.oracle.com/cd/E11882_01/server.112/e10803/config_cw.htm#HABPT5279 

And here about how to set it (beside this post!):

https://docs.oracle.com/cd/E11882_01/rac.112/e41959/admin.htm#CWADD90980

https://docs.oracle.com/cd/E11882_01/rac.112/e41959/oifcfg.htm#BCGGEFEI

 

Cheers

Ludo

Boost your Oracle RAC manageability with Policy-Managed Databases

Here are the slides of my presentation about Policy-Managed Databases. I’ve used them to present at Collaborate14 (#C14LV).

The same abstract was refused by the OOW14 and UKOUG Tech14 selection committees, so it’s time to publish the slides 🙂

RAC Attack 12c at Collaborate 14 and advanced labs

I’ve just published an advanced lab on SlideShare that RAC Attack attendees may do at Collaborate this year, instead of just doing the basic 2-node RAC installation on their laptop.

We’ll also offer an advanced lab about Flex Clusters and Flex ASM (written by Maaz Anjum). Moreover, I’m working on an additional lab that allows attendees to implement a multi-node RAC by using VirtualBox linked clones and GI Home clones, like I’ve shown in my previous post.

RAC Attack at #C14LV will be fun again. We’ll have a few t-shirts for the attendees, designed by me and Chet “Oraclenerd” Justice, kindly sponsored by OTN.

The workshop will end with beers and snacks (again, thank you OTN for sponsoring this :-)).


 

If you’re planning to attend Collaborate, join us and start your conference week in a good mood 🙂

 

How many Oracle instances can be consolidated on a single server?

According to the Exadata consolidation guide, this is what you can consolidate on Oracle specialized hardware:

NOTE: The maximum number of database instances per cluster is 512 for Oracle 11g Release 1 and higher. An upper limit of 128 database instances per X2-2 or X3-2 database node and 256 database instances per X2-8 or X3-8 database node is recommended. The actual number of database instances per database node or cluster depends on application workload and their corresponding system resource consumption.

 

But how many instances are actually being consolidated by DBAs from all around the world?

I’ve asked the Twitter community

I’ve sent this tweet a couple of weeks ago and I would like to consolidate some replies into a single blog post.

 

My customer’s environment, however, was NOT a production one; in production they have 45.

Some replies…

Wissem scores 73 on a production system, with 1TB of memory!

 

Chris correctly suggests giving the new 12c consolidation features a try:

 

Kevin, as a great expert, has already experimented with a one-hundred-instance environment:

But Bertrand impresses with his numbers!

Intel platform with 1TB of RAM = Xeon E7, suggests Kevin:

Flashdba has seen 87 instances on a single host, but in a multi-node RAC: still huge and complex!

Conclusion

Does this thread of tweets answer the question? Are you planning to consolidate your Oracle environment? If you have questions about how to plan your consolidation, don’t hesitate to get in touch! 🙂

Ludo

My Agenda at Oracle Open World 2013

For truth’s sake, I wasn’t planning to head to Oracle Open World this year. I’ve never had this opportunity before, and on the same days my company is planning its great internal conference (Trivadis Tech Event), which I always enjoy and which I hope will be made public soon.

But this summer I’ve started contributing heavily to the rewriting of the RAC Attack project, now focusing on Oracle RAC 12c.

So I’ve seized the opportunity and asked my managers to send me to OOW (with partial success). The final result is that I’m attending Oracle Open World and I’ll be glad to meet every one of you! 🙂 I’ll also mentor the RAC Attack (Operation Ninja) event, so make sure you come to the OTN Lounge, in the lobby of Moscone South, on Tuesday and Wednesday between 10:00AM and 2:00PM, and meet the whole team of RAC experts.


My Agenda

I’m really struggling with timing conflicts between the many sessions I would like to attend. It’s a unique opportunity to meet and listen to all my favorite bloggers, technologists and tweeps, and it will be a pity to miss many of their sessions.

However, I’ve ended up with this agenda. It’s a semi-definitive one, but I reserve the right to change things at the last minute, so follow me on Twitter during these days.


You may notice the two huge slots I’ve reserved for the RAC Attack, and the many sessions I’ll follow at Oak Table World 2013. Most of my favorite technologists will head there.

I’ll also attend two fitness events on Sunday and Monday; if you’re brave enough you can join us! 🙂

And as Yury says…

“If YOU or any of YOUR team‘s members participate in Oracle OpenWorld 2013 conference please please reach me. I would love to MEET UP.”

Ludovico

Oracle RAC and Policy Managed Databases

 

Some weeks ago I commented on a good post by Martin Bach (@MartinDBA on Twitter, make sure to follow him!):

http://martincarstenbach.wordpress.com/2013/06/17/an-introduction-to-policy-managed-databases-in-11-2-rac/

What I’ve realized is that Policy Managed Databases are not widely used, that there is a lot of misunderstanding about how they work, and that there are some concerns about implementing them in production.

My current employer Trivadis (@Trivadis, make sure to call us if your database needs a health check :-)) uses PMDs as a best practice, so it’s worth spending some words on them. Isn’t it?

 Why Policy Managed Databases?

PMDs are an efficient way to manage and consolidate several databases and services with the least effort. They rely on Server Pools, which are used to physically partition a big cluster into smaller groups of servers. Each pool has three main properties:

  • A minimum number of servers required to compose the group
  • A maximum number of servers
  • A priority that makes a server pool more important than the others

If the cluster loses a server, the following rules apply:

  • If a pool has fewer than min servers, a server is moved from a pool that has more than its min servers, starting with the one with the lowest priority.
  • If a pool has fewer than min servers and no other pool has more than its min servers, a server is moved from the pool with the lowest priority.
  • Pools with higher priority may take servers from pools with lower priority as long as the min server property is honored.

This means that if a server pool has the greatest priority, all the other server pools can be shrunk to satisfy its minimum number of servers.

Generally speaking, when you create a policy managed database (it can be an existing one, of course!) it is assigned to a server pool rather than to a single server. The pool is seen as an abstract resource where you can put your workload.


If you read the definition of Cloud Computing given by the NIST (http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf) you’ll find something similar:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared
pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that
can be rapidly provisioned and released with minimal management effort or service provider interaction

 

There are some major benefits in using policy managed databases (that’s solely my opinion):

  1. PMD instances are created/removed automatically. This means that you can add and remove nodes to/from the server pools or the whole cluster, and the underlying databases will be expanded or shrunk to follow the new topology.
  2. Server Pools (the base for PMDs) allow giving different priorities to different groups of servers. This means that, if correctly configured, you can lose several physical nodes without impacting your most critical applications and without reconfiguring the instances.
  3. PMDs are the base for Quality of Service Management, an 11gR2 feature that does cluster-wide resource management to achieve predictable performance on critical applications/transactions. QoS is a really advanced topic, so I warn you: do not use it without appropriate knowledge. Again, Trivadis has deep knowledge of it, so you may want to contact us for a consulting service (and why not, perhaps I’ll try to blog about it in the future).
  4. RAC One Node databases (RONDs?) can work beside PMDs to avoid instance proliferation for non-critical applications.
  5. Oracle is pushing it to achieve maximum flexibility for the Cloud, so it’s a trendy technology that’s cool to implement!
  6. I’ll find some other reasons, for sure! 🙂

What changes in real-life DB administration?

Well, the concept of a fixed relation Server -> Instance disappears, so at the very beginning you’ll have to be prepared for something dynamic (but once configured, things don’t change often).

As Martin pointed out in his blog, you’ll need to configure server pools and think about pools of resources rather than individual configuration items.

The spfile doesn’t contain any information related to specific instances, so the parameters must be database-wide.

The oratab will contain only the dbname, not the instance name, and the dbname is present in the oratab regardless of whether the server belongs to that database’s server pool or not.

Your scripts should take care of this.

Also, when connecting to your database, you should rely on services and access your database remotely, rather than trying to figure out where the instances are running. But if you really need to know, you can get it:
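For example (TST is the database used later in this post; the commented lines only sketch the output format):

  srvctl status database -d TST
  # Instance TST_1 is running on node node1
  # Instance TST_2 is running on node node2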

An approach for the crontab: sooner or later every DBA will need to schedule tasks with cron. Since a RAC has multiple nodes, you don’t want to run the same script many times, but rather choose which node will execute it.

My personal approach (every DBA has his personal preference) is to check the instance with cardinality 1 and match it with the current node, e.g.:
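A minimal sketch of the idea (database name and task path are placeholders):

  #!/bin/bash
  # run the task only on the node currently hosting the instance
  # with cardinality 1 (TST_1) of the policy managed database TST
  DB=TST
  NODE_OF_1=$(srvctl status database -d $DB | awk '/^Instance '$DB'_1 /{print $NF}')
  if [ "$NODE_OF_1" = "$(hostname -s)" ]; then
    /path/to/scheduled_task.sh
  fi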

In the example, TST_1 is running on node1, so the first evaluation returns TRUE. The second evaluation is done on node2, so it returns FALSE.

This trick can be used to have an identical crontab on every server and choose at runtime whether the local server is the preferred one to run tasks for the specified database.

 

A proof of concept with Policy Managed Databases

My good colleague Jacques Kostic has given me access to an enterprise-grade private lab, so I can show you some “live operations”.

Let’s start with the actual topology: it’s an 8-node stretched RAC with ASM diskgroups with failgroups on the remote site.


This should be enough to show you some capabilities of server pools.

The Generic and Free server pools

After a clean installation, you’ll end up with two default server pools:


 

The Generic one will contain all non-PMDs (if you use only PMDs, it will be empty). The Free one will own the “spare” servers, i.e. the ones left over when all server pools have reached their maximum size and don’t require more servers.

 New server pools

Actually, the cluster I’m working on already has two server pools defined (PMU and TST):


(the node assignment in the graphic is not relevant here).

They have been created with a command like this one:
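A hedged reconstruction with the values of this lab (11.2-style single-letter options: -g name, -l min, -u max, -i importance; TST’s max and importance are assumptions):

  srvctl add srvpool -g PMU -l 5 -u 6 -i 3
  srvctl add srvpool -g TST -l 2 -u 2 -i 2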

“srvctl -h” is a good starting point for a quick reference of the syntax.

You can check the status and the configuration with:
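For example:

  srvctl status srvpool -a
  srvctl config srvpool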

 

Modifying the configuration of serverpools

In this scenario, PMU is too big. The sum of the minimum servers is 2+5=7 nodes, so I have only one server that can be used for another server pool without falling below the minimum number of nodes.

I want to make some room for another server pool composed of two or three nodes, so I reduce the server pool PMU:
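That is, lowering the minimum number of servers (a sketch; from 5 to 3 in this case):

  srvctl modify srvpool -g PMU -l 3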

Notice that the PMU max size is still 6, so I don’t have free servers yet.

So, if I try to create another serverpool I’m warned that some resources can be taken offline:
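A sketch of the command (the min/max values are assumptions based on the description):

  srvctl add srvpool -g LUDO -l 2 -u 3 -i 1
  # the clusterware warns that resources must be stopped;
  # repeating the command with -f confirms the operation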

The clusterware proposes to stop 2 instances of the database pmu in the server pool PMU, because the pool can shrink from 6 to 3 servers, and I have to confirm the operation with the flag -f.

Modifying the serverpool layout can take time if resources have to be started/stopped.

My new server pool is finally composed of only two nodes, because I’ve set an importance of 1 (PMU wins, as it has an importance of 3).

Inviting RAC One Node databases to the party

Now that I have some room on my new serverpool, I can start creating new databases.

With PMD I can add two types of databases: RAC or RACONENODE. Depending on the choice, I’ll have a database running either on ALL NODES OF THE SERVER POOL or on ONE NODE ONLY. This is a kind of limitation in my opinion; I hope Oracle will improve it in the near future: it would be great to specify the cardinality at the database level, too.

Creating a RAC One Node DB is as simple as selecting two radio boxes in the dbca “standard” procedure:


The Server Pool can be created, or you can specify an existing one (as in this lab):


 

I’ve created two new RAC One Node databases:

  • DB LUDO (service PRISM :-))
  • DB VICO (service CHEERS)

I’ve ended up with something like this:

That can be represented with this picture:


 

RAC One Node databases can be managed as usual with online relocation (is it still called O-Motion?):
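For example (the target node is hypothetical; -w sets the relocation timeout in minutes):

  srvctl relocate database -d LUDO -n node8 -w 10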

Losing the nodes

With this situation, what happens if I lose (stop) one node?

The node was belonging to the pool LUDO; right after, however, I have this situation:

A server has been taken from the pool PMU and given to the pool LUDO. This is because PMU had one more server than its minimum requirement.

 

Now, if I keep losing one node at a time, I’ll have the following situation:

  • 1 node lost: PMU 3, TST 2, LUDO 2
  • 2 nodes lost: PMU 3, TST 2, LUDO 1 (as PMU is already on min and has higher priority, LUDO is penalized because has the lowest priority)
  • 3 nodes lost: PMU 3, TST 2, LUDO 0 (as LUDO has the lowest priority)
  • 4 nodes lost: PMU 3, TST 1, LUDO 0
  • 5 nodes lost: PMU 3, TST 0, LUDO 0

So, my hyper-super-critical application will still have three nodes, with plenty of resources to run even after a multiple physical failure, as it sits in the server pool with the highest priority and a minimum required server number of 3.

What I would ask Santa if I’m on the Nice List (and if Santa works at Redwood Shores)

Dear Santa, I would like:

  • To create databases with node cardinality, to have for example 2 instances in a 3-node server pool
  • Server Pools that are aware of the physical location when I use stretched clusters, so I could always end up with “at least one active instance per site”.

Think about it 😉

Ludovico