Ludovico is a member of the Oracle Database High Availability (HA), Scalability & Maximum Availability Architecture (MAA) Product Management team at Oracle.
He focuses on Oracle Data Guard, Flashback technologies, and Cloud MAA.
The name “RHPHELP_preNodeAddVal” might already give an idea of the cause: something related to the “cluvfy stage -pre nodeadd” evaluation that we normally run when adding a node by hand. FPP does not actually run cluvfy, but it calls the same primitives that cluvfy is based on.
In FPP, when the error does not give any useful information, this is the flow to follow:
use “rhpctl query audit” to get the date and time of the failing operation
open the “rhpserver.log.0” and look for the operation log in that time frame
get the UID of the operation e.g., in the following line it is “1556344143”:
PRGO-1260: Cluster Verification checks for database home provisioning failed for the specified working copy WC_db_19_11_FPPC.
PRCR-1178: Execution of command failed on one or more nodes
PRVF-7546: The work directory "/tmp/CVU_19.0.0.0.0_oracle/" cannot be used on node "fppc02"
This is often related to the filesystem /tmp that has the “noexec” option:
$ mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec)
Although it is tempting to just remount the filesystem with “exec”, you might be in this situation because your systems are configured to adhere to the STIG recommendations.
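If remounting is an acceptable exception to your security baseline, it is a one-liner; otherwise, a commonly used workaround is to point the verification utility at a work directory on a filesystem mounted without “noexec”, using the CV_DESTLOC environment variable (a sketch: the directory below is a hypothetical example, and you should verify that your FPP version honors the variable):

# quick fix, possibly against your security baseline:
$ mount -o remount,exec /tmp

# alternative: give CVU a work directory on an exec-enabled filesystem
$ mkdir -p /u01/app/oracle/cvutmp
$ export CV_DESTLOC=/u01/app/oracle/cvutmp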
If you attend Oracle-related events or if you are active on Twitter or other social media used by technologists, you might know many of us Product Managers directly. If that is the case, you know that we are in general very easy to reach and always happy to help.
When you contact us directly, however, sometimes we answer “Please open an SR for that“. Somewhat irritating, huh? “We had chats and drinks together at conferences, and now this bureaucracy?” This is understandable. Who likes opening SRs, after all? Isn’t it just easier to forward that e-mail internally and get the answer first-hand?
This is something that happened to me as well in the past, when I was not yet working for Oracle, and it still happens now, just with the answer coming from me, as PM.
Why? The first answer is “it depends on the question“. If it is anything that we can answer directly, we will probably do it.
It might be a question about a specific feature (“Does product X support Y?”, “Can you add this feature to your product?”), or a known problem for which the PM already knows the bug (in that case it is just a matter of looking up the bug number), or anything that is relatively easy to answer: “What are the best practices for X?”, “Do you have a paper explaining that?”, “Does this bug have a fix already?”
But there is a plethora of questions for which we need more information.
“I try this, but it does not work“. “I get this error and I think it is a bug“. “I have THIS performance problem“.
This is when I would personally ask you to open an SR, most of the time (unless I have a quick answer to give). And there are a few reasons:
Data protection
Oracle takes data protection very seriously. Oracle employees are trained to deal with potentially sensitive data and cannot forward customer information via e-mail: it could be exposed or forwarded to the wrong recipients by mistake. We don’t ask for TFA collections or logs via e-mail (even if sometimes customers send them to us anyway…).
Special privileges are required to access customer SRs: that is the only secure way we provide to transfer logs and protected information. The files uploaded into the SRs must be accessed through a specific application, and all the checkouts and downloads are tracked. When we need to forward customer information internally, we just specify the SR number and let our colleagues access the information themselves. Sometimes we use SRs just as placeholders to exchange data with customers, without having a support engineer working on them.
This is the single most important point, and it somehow makes the other points irrelevant. Still, the remaining ones are good points.
Important pieces in the discussion do not get lost
The answer does not always come first-hand: it might take 3-4 hops (sometimes more) plus analysis, comments, explanations, and discussions.
E-mail is not a good tool for this. Long threads can split and include just part of the audience (the “don’t reply to all” effect), attachments are deleted when replying instead of forwarding… and pieces get lost.
This is where you would use Jira or another trouble-ticketing system. Guess which one Oracle uses for its customers? 🙂
MOS has internal views to dig into TFA logs (that’s why it is a good idea to provide one whenever it might be relevant), and all the attachments, comments, and internal discussions are centralized there. But we need an SR to add the information to!
If you discover something new from a technical discussion, what do you do? Do you share it or do you keep it for yourself? MOS is part of our knowledge base, and it is a good idea to store important discussions in it. Support engineers can find solutions in SRs with similar cases. It is also a good opportunity for the support engineers themselves to be involved in one more interesting discussion, so next time they might have the answer at their fingertips.
To conclude, think about it as a win-win: you give us interesting problems that might help improve the product, and you get a Guardian Angel on your SR for free 😉
FPP was still young and lacking many features at that time, but it already changed the way I worked in the following years. I embraced out-of-place patching, developed some basic scripts to install Oracle Homes, and sought automation and standardization at all costs.
And I discovered that, in the meantime, FPP had made giant steps, with many new features and fixes for quite a few usability and performance problems.
Last year, when I joined the Oracle Database High Availability (HA), Scalability, and Maximum Availability Architecture (MAA) Product Management team at Oracle, I took on (among other things) the Product Manager role for FPP.
Becoming an Oracle employee after 20 years of working with Oracle technology is a big leap. It allows me to understand how big the company is, and how collaborative and friendly Oracle employees are (yes, I was used to marketing nonsense, insistent salesmen, and unfriendly license auditors; this is slowly changing with Oracle embracing the Cloud, but it is still a fresh wound for many customers. Expect this to change even more! As for me… I’ll be the same I’ve always been 🙂 ).
Now I have daily meetings with big customers (bigger than any I have had in the past), development teams, other product managers, Oracle consultants, and community experts. My primary goal is to make the product better, increase its adoption, and help customers have the best experience with it. This includes testing the product myself, writing specs, presentations, and videos, collecting feedback from customers, tracking bugs, and managing escalations.
I am a Product Manager for other products as well, but I have to admit that FPP is the product that takes most of my Product Manager time. Why?
I will give a few reasons in my next blog post(s).
I have been a DBA/consultant for customers and big production environments for over twenty years. I have explained my career path, more or less, in this blog post.
Database (and application) high availability has always been one of my favorite areas. Over the years I have become a high availability expert (my many blog posts are there to confirm it) and I have spent a lot of time building, troubleshooting, teaching, presenting, and advocating these gems of technology that are RAC, Data Guard, Application Continuity, and the many other products that are part of the Oracle Maximum Availability Architecture solution. Customers fear downtime, and I have always been with them on that. But in my case, it looks like Yoda’s famous quote worked well for me (in a good way): “Fear is the path to the dark side.”
I’ll be joining the Oracle Maximum Availability Architecture Product Management team as MAA Product Manager (or rather Cloud MAA, but I will not explain that here ;-)) next November.
(for those who are not familiar with the joke, the “Dark Side” is how we often refer to the Oracle employees in the Oracle Community 😉 )
I remember as if it were yesterday presenting some Data Guard 12c new features in front of a big audience at Collaborate 2014. There I met two incredible people from the MAA Product Management team: Larry Carpenter and Markus Michalewicz. Larry has been a great source of inspiration for improving my seniority and ease when presenting in public, while Markus has become a friend over the years, in addition to being one of the most influential people in my professional network.
Now I have the opportunity to join that team, and it feels like the most natural move in my career.
And because I imagine some of you will have questions, here are the answers to those I have been asked most frequently so far:
MAA PM does not mean becoming a team lead or supervising other colleagues; I’ll be a “regular” PM
I will stay in Switzerland and work remotely from here
I will stay in “the conference circus” and keep presenting as soon as the COVID-19 situation allows it
Yes, I was VERY happy in Trivadis and it will always have a special place in my heart
One of the challenges of automation in big Oracle environments is dealing with tnsnames.ora files.
These files can grow big and are sometimes hard to distribute and maintain properly.
The worst case is when manual modifications are needed: manual operations, if not done carefully, can screw up the connections to the databases.
The best solution is always to use LDAP name resolution. I have seen customers using OID, OUD, Active Directory, and OpenLDAP, all with a great level of control and automation. However, some customers don’t have (or don’t want) this possibility and keep relying on TNS name resolution.
When Data Guard (and eventually RAC) is in place, the tnsnames.ora gets filled with entries for the DGConnectIdentifiers and the StaticConnectIdentifier. If I add the observer, an additional entry is required to access the dbname_CFG service created by Fast-Start Failover.
Actually, none of these entries are required if I use Easy Connect.
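As a reminder, an Easy Connect string packs host, port, and service name into a single identifier, replacing a whole tnsnames.ora entry (hypothetical alias, host, and service names, consistent with the example below):

# tnsnames.ora entry:
TOOLCDB1_SITE1 =
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=TCP)(HOST=newbox01)(PORT=1521))
    (CONNECT_DATA=(SERVICE_NAME=TOOLCDB1_SITE1))
  )

# Easy Connect equivalent, usable directly wherever a connect identifier is expected:
# newbox01:1521/TOOLCDB1_SITE1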
The basic configuration with Data Guard is quite simple to achieve with Easy Connect. In this example I have:
– The primary database TOOLCDB1_SITE1
– The duplicated database for standby TOOLCDB1_SITE2
After setting up the static registration (no Grid Infrastructure in my lab):
SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=TOOLCDB1_SITE1_DGMGRL)
      (SID_NAME=TOOLCDB1)
      (ORACLE_HOME=/u01/app/oracle/product/db_19_8_0)
    )
  )
and copying the password file, the configuration can be created with:
DGMGRL> create configuration TOOLCDB1 as primary database is TOOLCDB1_SITE1 connect identifier is 'newbox01:1521/TOOLCDB1_SITE1';
Configuration "toolcdb1" created with primary database "toolcdb1_site1"
DGMGRL> edit database TOOLCDB1_SITE1 set property 'StaticConnectIdentifier'='newbox01:1521/TOOLCDB1_SITE1_DGMGRL';
Property "StaticConnectIdentifier" updated
DGMGRL> add database TOOLCDB1_SITE2 as connect identifier is 'newbox02:1521/TOOLCDB1_SITE2';
Database "toolcdb1_site2" added
DGMGRL> edit database TOOLCDB1_SITE2 set property 'StaticConnectIdentifier'='newbox02:1521/TOOLCDB1_SITE2_DGMGRL';
Property "StaticConnectIdentifier" updated
DGMGRL> enable configuration;
Enabled.
That’s it.
Now, if I want to have the configuration observed, I need to activate the Fast Start Failover:
DGMGRL> edit database toolcdb1_site1 set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database toolcdb1_site2 set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database toolcdb1_site1 set property FastStartFailoverTarget='toolcdb1_site2';
Property "faststartfailovertarget" updated
DGMGRL> edit database toolcdb1_site2 set property FastStartFailoverTarget='toolcdb1_site1';
Property "faststartfailovertarget" updated
DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.
DGMGRL> enable fast_start failover;
Enabled in Zero Data Loss Mode.
With just two databases, FastStartFailoverTarget is not strictly needed, but I usually set it anyway, as other databases might be added to the configuration in the future.
After that, the broker complains that FSFO is enabled but there is no observer yet:
DGMGRL> show fast_start failover;

Fast-Start Failover: Enabled in Zero Data Loss Mode

  Protection Mode:    MaxAvailability
  Lag Limit:          0 seconds

  Threshold:          180 seconds
  Active Target:      toolcdb1_site2
  Potential Targets:  "toolcdb1_site2"
    toolcdb1_site2 valid
  Observer:           (none)
  Shutdown Primary:   TRUE
  Auto-reinstate:     TRUE
  Observer Reconnect: 180 seconds
  Observer Override:  FALSE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Write Errors          YES

  Oracle Error Conditions:
    (none)

DGMGRL> show configuration;

Configuration - toolcdb1

  Protection Mode: MaxAvailability
  Members:
  toolcdb1_site1 - Primary database
    Warning: ORA-16819: fast-start failover observer not started

    toolcdb1_site2 - (*) Physical standby database

Fast-Start Failover: Enabled in Zero Data Loss Mode

Configuration Status:
WARNING   (status updated 39 seconds ago)
Observer for multiple configurations
This feature was introduced in 12.2, but it is still not widely used.
Before 12.2, the Observer was a foreground process: DBAs had to start it in a wrapper script executed with nohup in order to keep it alive.
Since 12.2, the observer can run as a background process, as long as there is a valid wallet for the connection to the databases.
Also, 12.2 introduced the capability of starting multiple configurations with a single dgmgrl command: “START OBSERVING”.
First, I need a wallet. And here comes the first compromise:
Having a single dgmgrl session to start all my configurations means that I have a single wallet for all the databases that I want to observe.
Fair enough, all the DBs (CDBs?) are managed by the same team in this case.
If I have only observers on my host, I can easily point to the wallet from my central sqlnet.ora.
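A minimal sketch of such a sqlnet.ora (the wallet directory is a hypothetical path):

WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet))
  )
SQLNET.WALLET_OVERRIDE = TRUE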
Which connection descriptors do I need?
The Observer uses the DGConnectIdentifier to keep observing the databases, but it needs a connection to both of them, using the TOOLCDB1_CFG service (unless I specify something different with the broker configuration property ConfigurationWideServiceName), to connect to the configuration and get the DGConnectIdentifier information. Again, you can check this in the documentation or in the note Oracle 12.2 – Simplified OBSERVER Management for Multiple Fast-Start Failover Configurations (Doc ID 2285891.1).
So I need to specify three secrets for three connection descriptors:
Oracle Secret Store Tool Release 21.0.0.0.0 - Production
Version 21.0.0.0.0
Copyright (c) 2004, 2020, Oracle and/or its affiliates. All rights reserved.
Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:
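The commands that produce these prompts might look like the following (a sketch: the wallet path is hypothetical, I use SYS as the stored user, and the three descriptors match the ones discussed below; adapt them to your configuration):

$ mkstore -wrl /u01/app/oracle/wallet -create
$ mkstore -wrl /u01/app/oracle/wallet -createCredential "newbox01,newbox02:1521/TOOLCDB1_CFG" sys
$ mkstore -wrl /u01/app/oracle/wallet -createCredential "newbox01:1521/TOOLCDB1_SITE1" sys
$ mkstore -wrl /u01/app/oracle/wallet -createCredential "newbox02:1521/TOOLCDB1_SITE2" sys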
The first one will be used for the initial connection, the other two to observe the primary and the standby.
I need to be careful that the first EZConnect descriptor matches EXACTLY what I put in observer.ora (see next step) and that the last two match my DGConnectIdentifiers (unless I specify something different with ObserverConnectIdentifier); otherwise I will get errors and the observer will not observe correctly (or will not start at all).
dgmgrl then needs a file named observer.ora.
$ORACLE_BASE/admin/observers or the central TNS_ADMIN would be good locations, but what if I have observers that must be started from multiple Oracle Homes?
In that case, having an observer.ora in $ORACLE_HOME/network/admin (or $ORACLE_BASE/homes/{OHNAME}/network/admin/ if Read-Only Oracle Home is enabled) would be a better solution; I would then need to start one observer session per Oracle Home.
The content of my observer.ora must be something like:
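A sketch of what it might contain for this configuration, based on the (CONFIG=…) format and on the connect identifier shown in the START OBSERVING output below (check Doc ID 2285891.1 for the exact syntax):

(CONFIG=(NAME=TOOLCDB1)(CONNECT_IDENTIFIER='newbox01,newbox02:1521/TOOLCDB1_CFG'))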
This is an example for my configuration, but I can put as many (CONFIG=…) entries as I want in order to observe multiple configurations.
Then, if everything is configured properly, I can start all the observers with a single command:
DGMGRL> SET OBSERVERCONFIGFILE=/u01/app/oracle/admin/observers/observer.ora
DGMGRL> START OBSERVING
ObserverConfigFile=observer.ora
observer configuration file parsing succeeded
Submitted command "START OBSERVER" using connect identifier "newbox01,newbox02:1521/TOOLCDB1_CFG"
Check superobserver.log, individual observer logs and Data Guard Broker logs for execution details.
In my config, the Oracle Database version is 19.7 and the databases are actually CDBs. No Grid Infrastructure, non-OMF datafiles.
It is important to highlight that a lot of things have changed since 12.1. And because 19c is the LTS version now, it does not make sense to try anything older.
First, I just want to make sure that my standbys are aligned.
PR00 (PID:6718): Media Recovery Waiting for T-1.S-41
So, yeah, not having OMF might get you some warnings like: WARNING: File being created with same name as in Primary
But it is good to know that the cascade standby deals well with new PDBs.
Of course, this is not of great interest, as I know that the problem with Multitenant comes from CLONING PDBs (from either local or remote PDBs) in read-write mode.
So absolutely the same behavior between the two levels of standby.
According to the documentation: https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-PLUGGABLE-DATABASE.html#GUID-F2DBA8DD-EEA8-4BB7-A07F-78DC04DB1FFC
I quote what is specified for the parameter STANDBYS={ALL|NONE|…}: “If you include a PDB in a standby CDB, then during standby recovery the standby CDB will search for the data files for the PDB. If the data files are not found, then standby recovery will stop and you must copy the data files to the correct location before you can restart recovery.”
“Specify ALL to include the new PDB in all standby CDBs. This is the default.”
“Specify NONE to exclude the new PDB from all standby CDBs. When a PDB is excluded from all standby CDBs, the PDB’s data files are unnamed and marked offline on all of the standby CDBs. Standby recovery will not stop if the data files for the PDB are not found on the standby. […]”
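For illustration, the STANDBYS clause goes directly into the CREATE PLUGGABLE DATABASE statement (hypothetical PDB and database link names):

SQL> create pluggable database NEWPDB from SRCPDB@cdb_link standbys=none;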
So, in order to avoid the MRP crashing, I should have included STANDBYS=NONE.
But the documentation is not up to date, because in my case the PDB is skipped automatically and the recovery process DOES NOT STOP:
However, the recovery is marked ENABLED for the PDB on the standby, while using STANDBYS=NONE it would have been DISABLED.
SQL> select name, recovery_status from v$pdbs;

NAME                           RECOVERY
------------------------------ --------
PDB$SEED                       ENABLED
LATERALUS                      ENABLED
PNEUMA                         ENABLED
So, another difference from the documentation, which states: “You can enable a PDB on a standby CDB after it was excluded on that standby CDB by copying the data files to the correct location, bringing the PDB online, and marking it as enabled for recovery.”
This reflects the findings of Philippe Fierens in his blog (http://pfierens.blogspot.com/2020/04/19c-data-guard-series-part-iii-adding.html).
This behavior was probably introduced between 12.2 and 19c, but I could not find out exactly when, as it is not explicitly stated in the documentation.
However, I remember well that in 12.1.0.2, the MRP process was crashing.
In my configuration (not on purpose, but interesting for this article), the first standby has the very same directory structure, while the cascaded standby does not.
In any case, there is a potentially big problem for all the customers implementing Multitenant on Data Guard:
With the old behavior (MRP crashing), it was easy to spot when a PDB was cloned online into a primary database, because a simple dgmgrl “show configuration” would have displayed a warning because of the increasing lag (following the MRP crash).
With the current behavior, the MRP keeps recovering and “show configuration” displays “SUCCESS” despite a PDB not being copied to the standby (and thus not protected).
The missing PDB is easy to spot once I know that I have to look for it. However, for each PDB to recover (and I might have many!), I have to prepare the renaming of the datafiles and the creation of the directories (do not forget I am using non-OMF here).
Now, the datafile names on the standby got changed to …/UNNAMEDnnnnn.
So I have to get the original ones from the primary database and do the same replace that db_file_name_convert would do:
set trim on
col rename_file for a300
set lines 400

select 'set newname for datafile '||file#||' to '''||replace(name,'/TOOLCDB1/','/TOOLCDX1/')||''';' as rename_file
from v$datafile
where con_id=6;
and put the result in an RMAN script (this is for the second standby; the first one has the same directory structure, so the same paths):
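A sketch of what the generated script might look like (hypothetical file numbers and paths; the SET NEWNAME lines are the output of the query above, and the files are pulled from the primary service):

run {
  # one line per datafile of the PDB, generated by the query above:
  set newname for datafile 54 to '/u02/oradata/TOOLCDX1/mypdb/system01.dbf';
  set newname for datafile 55 to '/u02/oradata/TOOLCDX1/mypdb/sysaux01.dbf';
  # pull the files from the primary and repoint the standby controlfile:
  restore datafile 54, 55 from service TOOLCDB1_SITE1;
  switch datafile all;
}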
Then I need to stop the recovery, start it and stop it again, put the datafiles online, and finally restart the recovery.
These are the same steps used by Philippe in his blog post, just adapted to my taste 🙂
Now I no longer have any offline datafiles on the standby:
SQL> select 'ERROR: CON_ID '||con_id||' has '||count(*)||' datafiles offline!' from v$recover_file where online_status='OFFLINE' group by con_id;

no rows selected
I will not publish the steps for the second standby, they are exactly the same (same output as well).
In the end, it is important to highlight that monitoring the OFFLINE datafiles on the standby becomes a crucial point to guarantee the health of Data Guard in Multitenant. Relying on the Broker status or on “PDB recovery disabled” is not enough.
On the bright side, it is nice to see that Cascade Standby configurations do not introduce any variation, so cascaded standbys can be treated the same as “direct” standby databases.
When consolidating to multitenant, there are several consolidation patterns.
Big, complex databases usually have special requirements for which it might be a good choice to go to single-tenant (a single PDB in one CDB)
Small, relatively easy databases are the best candidates for consolidation to multitenant
Schema-consolidated databases require special attention, but in general there are several advantages to converting individual schemas (or groups of schemas) to individual PDBs
For the latter, there are several techniques to convert a schema to a PDB:
export/import (obviously), possibly with GoldenGate to do it online
Transportable tablespaces (if the schemas follow a strict 1-to-1 tablespace separation)
partial PDB cloning
We will focus on the last one for this blog post.
Situation
Here we have a PDB with some schemas. Each of them has a dedicated tablespace but, accidentally, two of them also have some objects in a common tablespace.
This happens frequently when all the users have quota on the default database tablespace and do not necessarily have a personal default tablespace.
This is the typical situation where transportable tablespaces become hard to achieve without some upfront segment movement, as the tablespaces are not self-contained.
Thankfully, Oracle Multitenant allows us to clone a PDB from a remote one and specify only a subset of tablespaces.
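This is done with the USER_TABLESPACES clause of CREATE PLUGGABLE DATABASE. For our example it might look like this (hypothetical PDB and database link names; ABC plus the common DATA tablespace are the ones to keep):

SQL> create pluggable database PDB_ABC from PDB1@cdb1_link
  2  user_tablespaces=('ABC','DATA');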
So, is everything else gone from the clone? Not at all! The tables are still there and reference tablespaces that do not exist anymore. How is that possible?
Actually, the tablespace definitions are still there if we look at v$tablespace:
SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
DATA
ABC
DEF
GHI

8 rows selected.
If we give a look at the DBA_TABLESPACES view definition, there are a few interesting filters:
create or replace view DBA_TABLESPACES
...
where ts.online$ != 3
  and bitand(flags, 2048) != 2048
  and bitand(ts.flags, 16777216) != 16777216
...
/
What is their meaning?
online$   /* status (see KTT.H):                     */
          /* 1 = ONLINE, 2 = OFFLINE, 3 = INVALID    */
flags     /* various flags: see ktt3.h               */
          /* 0x800 = this actually describes a group */  -- 2048
          /* 0x1000000 = bad transport of ts in pdb  */  -- 16777216
So the first WHERE clause skips all the INVALID tablespaces (when you drop a tablespace, it is still stored in ts$ with this state), the second one skips the definitions of temporary tablespace groups, and the third one is actually our candidate.
Indeed, this is exactly what ts$ shows for these tablespaces.
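A quick way to check it (a sketch, run as a privileged user inside the PDB; the excluded tablespaces should have bit 0x1000000 set in their flags):

SQL> select name, online$, flags, bitand(flags, 16777216) as bad_transport
  2  from sys.ts$
  3  where name in ('ABC','DEF','GHI');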
But the user GHI, who has no objects anymore, is still there.
SQL> select username from dba_users
  2  where username in ('ABC','DEF','GHI');

USERNAME
-------------------------
GHI
DEF
ABC
So we need to drop it explicitly.
SQL> drop user ghi cascade;

User dropped.

SQL>
Automate the cleanup
This is an example PL/SQL block aimed at automating the cleanup.
Actually:
Users that had segments in one of the excluded tablespaces but do not have any segments left are just LOCKED (for security reasons, you can guess why).
Tablespaces that meet the “excluded PDB” criteria, are just dropped
set serveroutput on
DECLARE
  -- get the users that had tables in the non-cloned TBSP that do not have any other segments...
  [...]
        dbms_output.put_line('Cannot drop default tablespace '||r_tbsps.name||'. Please change the default tablespace and drop it afterwards.');
      END;
  END LOOP;
END;
/
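For reference, here is a minimal, self-contained sketch of such a cleanup block. It is my reconstruction, not the original script: it assumes that bit 0x1000000 in ts$.flags marks the tablespaces excluded by the partial clone, and it uses the default tablespace as the heuristic to decide which users to lock:

set serveroutput on
DECLARE
  v_cnt NUMBER;
BEGIN
  -- loop over the tablespaces flagged "bad transport of ts in pdb" (0x1000000),
  -- i.e. the ones excluded from the partial clone
  FOR r_tbsps IN (SELECT name
                    FROM sys.ts$
                   WHERE bitand(flags, 16777216) = 16777216
                     AND online$ != 3) LOOP
    -- lock the users that point at this tablespace and have no segments left
    FOR r_users IN (SELECT username
                      FROM dba_users u
                     WHERE u.default_tablespace = r_tbsps.name
                       AND NOT EXISTS (SELECT 1 FROM dba_segments s
                                        WHERE s.owner = u.username)) LOOP
      EXECUTE IMMEDIATE 'ALTER USER "'||r_users.username||'" ACCOUNT LOCK';
      dbms_output.put_line('User '||r_users.username||' locked.');
    END LOOP;
    -- drop the tablespace, unless it is the PDB default permanent tablespace
    SELECT count(*)
      INTO v_cnt
      FROM database_properties
     WHERE property_name = 'DEFAULT_PERMANENT_TABLESPACE'
       AND property_value = r_tbsps.name;
    IF v_cnt = 0 THEN
      EXECUTE IMMEDIATE 'DROP TABLESPACE "'||r_tbsps.name
                      ||'" INCLUDING CONTENTS AND DATAFILES';
      dbms_output.put_line('Tablespace '||r_tbsps.name||' dropped.');
    ELSE
      dbms_output.put_line('Cannot drop default tablespace '||r_tbsps.name
                      ||'. Please change the default tablespace and drop it afterwards.');
    END IF;
  END LOOP;
END;
/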
This is the output of the cleanup script for the partial clone we have just seen:
SQL> @cleanup_partial_pdb.sql

ALTER USER GHI ACCOUNT LOCK
User GHI locked.

DROP TABLESPACE DEF INCLUDING CONTENTS AND DATAFILES
Tablespace DEF dropped.

DROP TABLESPACE GHI INCLUDING CONTENTS AND DATAFILES
Tablespace GHI dropped.

PL/SQL procedure successfully completed.
The PL/SQL block can be quite slow depending on the number of segments and tablespaces, so it might be a good idea to have a custom script instead of this automated one.
What about user DEF?
The automated procedure has not locked the account DEF. Why?
Actually, the user DEF still has some segments in the DATA tablespace. Hence, the procedure cannot be sure what the original intention was: copy the user, like ABC? The clone procedure only allows specifying the tablespaces, so this is the only possible result.