ORACLE 12C... it begins with this
If you want to keep up with the current generation of technology, you must upgrade your skills as fast as technology moves forward. Since this blog is aimed at database administrators, DBAs in particular need to keep up with each update that reaches the market in order to meet client requirements on up-to-date server software. From the administration perspective, Oracle 12c brings a huge change to the Oracle DBA's work: Oracle has reincarnated the whole architecture of the database, with important updates and several new features. The release focuses on cloud computing, security, table recovery in RMAN, Smart Flash Cache, performance improvement, scalability, indexing, query optimization, SQL plan management and many more, with over 500 new features in total. Below we look at a few of the highlights.
Container and Pluggable Databases
This is a noteworthy design change in how Oracle manages databases and their underlying architecture, and it is the most discussed feature in Oracle 12c. Oracle introduces the concept of container databases (CDBs) and pluggable databases (PDBs). The container database holds all the metadata that is inherited by the core Oracle components when it is installed, while pluggable databases store user data and other pertinent metadata.
Apart from the obvious advantage of better resource utilization, there are numerous other benefits. Pluggable databases are easy to back up and move to another system. Patching or upgrading the container database automatically updates all of its pluggable databases, and it gives you a single point of administration. From a development perspective, developers no longer have to implement multi-tenancy in their application code; simply creating a separate pluggable database for each potential tenant does the trick more easily and more efficiently. The separation between pluggable databases also helps satisfy security requirements for some applications.
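As a quick sketch of how this looks in practice (the PDB name, admin user, password and file paths below are illustrative and assume a 12c container database):

```sql
-- Confirm we are connected to a container database.
SELECT name, cdb FROM v$database;

-- List the pluggable databases in this container.
SELECT con_id, name, open_mode FROM v$pdbs;

-- Create a new pluggable database from the seed and open it.
CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb_admin IDENTIFIED BY MyPassword1
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/',
                       '/u01/oradata/cdb1/pdb1/');

ALTER PLUGGABLE DATABASE pdb1 OPEN;
```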
Recover a table with RMAN
Before Oracle 12c, recovering a table from loss or corruption was a difficult affair.
Most of the time you would notice that the table was lost or corrupted too late to use Flashback technology, and then the only way to go was to duplicate your database to a point in time when the table was still valid, export the table and re-import it into your production database. That was a long drawn out process.
One of the good new features in Oracle 12c is the ability to restore a single table, or a single partition of a partitioned table, from an RMAN backup via the RECOVER TABLE command. Behind the scenes, RMAN creates an auxiliary database or instance which is used to recover the tables to a specific point in time. This auxiliary database contains a few system-related data files (SYSTEM, SYSAUX, UNDO) plus the data files belonging to the tablespace containing the tables we are looking to restore. RMAN then creates a Data Pump export dump file containing the recovered tables or partitions, imports the data into the target database using Data Pump import, and finally removes the temporary auxiliary instance.
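A minimal sketch of the command (the schema, table, timestamp and auxiliary path are all illustrative):

```sql
RMAN> RECOVER TABLE hr.employees
        UNTIL TIME "TO_DATE('2015-06-01 09:00:00','YYYY-MM-DD HH24:MI:SS')"
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE 'HR'.'EMPLOYEES':'EMPLOYEES_RECOVERED';
```

The AUXILIARY DESTINATION clause tells RMAN where to place the files of the temporary auxiliary instance, and REMAP TABLE imports the recovered data under a new name so the existing table is not overwritten.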
Online rename and relocation of an active data file
Prior to Oracle 12c, moving datafiles was always an offline task. There were certain techniques you could employ to minimize the downtime, but you couldn't remove it completely. Oracle 12c includes an enhancement to the ALTER DATABASE command that allows datafiles to be moved online. A text description of the syntax is shown below; the full syntax diagrams and description of the ALTER DATABASE command need more space than this blog allows.
ALTER DATABASE MOVE DATAFILE ( 'filename' | 'ASM_filename' | file_number )
  [ TO ( 'filename' | 'ASM_filename' ) ] [ REUSE ] [ KEEP ]
The source file can be specified using the file number or the file name, while the destination file must be specified by file name. The REUSE keyword indicates the new file should be created even if it already exists. The KEEP keyword indicates the original copy of the datafile should be retained; when the source file is an OMF file, the KEEP option cannot be used. If the destination file is an Oracle Managed File, the TO clause can be omitted and the file will be created with an OMF name in the default file destination.
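For example (the file names and the file number below are illustrative and must exist in your database):

```sql
-- Move a datafile online, specifying the source by name.
ALTER DATABASE MOVE DATAFILE '/u01/oradata/cdb1/users01.dbf'
  TO '/u02/oradata/cdb1/users01.dbf';

-- Specify the source by file number and retain the original copy.
ALTER DATABASE MOVE DATAFILE 4
  TO '/u02/oradata/cdb1/users01_copy.dbf' KEEP;
```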
Using online relocation of data files to migrate a data file from non-ASM to ASM:
Online renaming and relocation of data files is available from 12c onwards and enables us to rename or relocate a data file while the database is open and users are accessing it. This simplifies datafile management and avoids downtime even when moving the SYSTEM or UNDO tablespaces.
Create a script that moves the data files into ASM and execute it; just doing that migrates the data files into ASM without any downtime. The tricky part is moving the temp file of the TEMP tablespace in the seed container, which is used for the creation of pluggable databases. Once the data files and temporary files have been migrated into ASM, move the redo log files into ASM, put the parameter file into ASM as well, and start the database.
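Migrating a file into ASM is then just another online move; the disk group and file name below are illustrative:

```sql
-- Move a datafile from the filesystem into the +DATA ASM disk group.
-- With Oracle Managed Files in ASM, naming the disk group alone is enough;
-- ASM assigns the fully qualified file name itself.
ALTER DATABASE MOVE DATAFILE '/u01/oradata/cdb1/users01.dbf' TO '+DATA';
```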
Data Pump enhancements
If you are a DBA, you are going to spend a lot of time with Data Pump. All roads seem to lead back to it. There are some more headline worthy features, like transportable database, but the two that jumped out at me were actually pretty small, but awesome.
Turn off redo log generation
The new TRANSFORM parameter option introduced in Data Pump import gives you the flexibility to turn off redo generation for the objects during the course of an import. When the DISABLE_ARCHIVE_LOGGING value is specified with the TRANSFORM option, redo generation for the objects in context is turned off for the entire duration of the import. This feature provides great relief when importing large tables and reduces excessive redo generation, which results in quicker imports. The attribute applies to tables and indexes.
This example demonstrates the feature:
$ impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log transform=disable_archive_logging:y logtime=all
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y switches table and index imports to NOLOGGING for the lifespan of the import operation.
LOGTIME=ALL puts a timestamp on each output message so you can see how long individual operations took.
Transport a view as a table
This is another improvement in Data Pump. With the new VIEWS_AS_TABLES option, you can unload view data into a table. The following example shows how to export a view's data as a table (the view name scott.emp_v is illustrative):
$ expdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log views_as_tables=scott.emp_v