#3

Default

Thanks for the effort,
but please double-check the link,
because I need it urgently.
Thanks.


#4

    Default

I spent a big effort composing this document summarizing the new features of Oracle Database 11g. It is useful when looking for good reasons to propose an 11g migration to a customer, and also useful for newcomers approaching the many new 11g features for the first time.
    Active Database Duplication or Network-Aware database duplication
No backup is needed for duplication. RMAN does not copy and apply the online redo logs, only changes up to the most recent archived log. Both databases require a password file with the same password.
The Flash Recovery Area is not copied. Archived logs are copied if necessary.
The source may be mounted or open (archivelog mode). Archived logs can be copied automatically if you specify the clause.
    duplicate database to dupdb from active database db_file_name_convert '/u01', '/v01' SPFILE parameter_value_convert '/u01', '/v01'
    set log_file_name_convert '/u01', '/v01' SET sga_max_size '3000m' set sga_target '2000m'
SET => SET values are processed later; db_file_name_convert is not specified with SET.
    SPFILE => SPFILE must be in use in source instance, it will be copied on destination and used to start auxiliary instance. Destination database must be
    started using a text init.ora containing only DB_NAME parameter.
    db_file_name_convert=> replace pattern for datafiles and tempfiles
    NOFILENAMECHECK=> databases are on different hosts, no need to check for different filenames
    PASSWORD FILE=> if specified will copy source password file to destination
    1)-Create an init.ora for auxiliary instance
    2)-Create a passwordfile for auxiliary instance with same SYSDBA password as the source(target)
    3)-startup auxiliary instance using startup nomount
    4)-Connect RMAN to target and to auxiliary instance, and issue the active database duplication command
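
A minimal end-to-end sketch of the four steps above (instance name, paths and password are illustrative assumptions):

#1) minimal init.ora for the auxiliary instance, containing only DB_NAME
echo "db_name=dupdb" > $ORACLE_HOME/dbs/initdupdb.ora
#2) password file with the same SYSDBA password as the source
orapwd file=$ORACLE_HOME/dbs/orapwdupdb password=oracle
#3) start the auxiliary instance in nomount
export ORACLE_SID=dupdb
sqlplus / as sysdba
SQL> startup nomount pfile=?/dbs/initdupdb.ora
#4) connect RMAN to the target and the auxiliary instance, then duplicate over the network
rman target sys/oracle@srcdb auxiliary sys/oracle@dupdb
RMAN> duplicate target database to dupdb from active database
      spfile parameter_value_convert '/u01', '/v01'
      set log_file_name_convert '/u01', '/v01'
      nofilenamecheck;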

    Parallel Backup or Multisection backup
Parallelizes a datafile backup into up to 256 sections (manually customizable for each datafile); multiple channels are recommended.
All sections have the same size, except possibly the last. SECTION SIZE is mandatory and a value must be specified. Can't use MAXPIECESIZE with SECTION SIZE.
    SECTION SIZE may be specified using VALIDATE DATAFILE command. Each section is a backup piece in the backup set.
    backup SECTION SIZE 200m tablespace pippo;
    * Views *
    V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE (piece, section_size, multi_section[yes, no])

    Archival(Long-Term) Backup - KEEP option
The control file, spfile, datafiles and archived redo logs are backed up.
The LOGS/NOLOGS clause can't be specified with KEEP. A self-contained archival backup is immune from any retention policy. Redo logs are switched, since the current log is needed.
    Store archival backup in non flash recovery area, may fill up quickly. Recovery catalog required if using KEEP FOREVER.
    CHANGE not allowed for backup in flash recovery area.
    An archived log is never made obsolete if needed for a guaranteed restore point, may be obsolete if needed for a normal restore point.
    backup database format 'c:\arch\db_%U.bkp' tag quarterly KEEP until time 'sysdate + 365' RESTORE POINT firstq08;
    #KEEP=>[FOREVER, UNTIL TIME], recovery catalog required if using KEEP FOREVER.
    #RESTORE POINT=>SCN is captured when backup finish. The time until which RMAN can restore an archival backup
    CHANGE backup tag 'pippo_bck' forever;
    change copy of database controlfile keep;

    Fast Incremental Backup
    Flashback Database feature is required, searches flashback logs before backup files. Only corrupted blocks are restored and recovered in a datafile.
    Archivelog mode always required. Db mounted or open. No proxy copies allowed. Block change tracking may be enabled on physical standby db.
10g blockrecover (no longer available) -> 11g recover ... block
recover datafile 2 block 24 datafile 10 block 21 from tag=sundaynight;
    RMAN>recover corruption list;
#recovers all corrupted blocks listed in V$DATABASE_BLOCK_CORRUPTION; repaired blocks are then removed from the view
    * Identified Corrupt Blocks *
    analyze table, analyze index, list failure, validate(populates V$DATABASE_BLOCK_CORRUPTION), backup...validate, export to /dev/null, dbverify
    * Views *
    V$DATABASE_BLOCK_CORRUPTION list block corrupted, populated by VALIDATE DATABASE

    RMAN Compression
    BZIP2(good compression) and ZLIB(def. faster 45%)
    RMAN>configure compression algorithm 'bzip2';

    Archived Redo Log Deletion
configure archivelog deletion policy=NONE(def.) (applies to all archived logs stored on disk, including the Flash Recovery Area) (archived logs will be marked
for deletion after being transferred to all log_archive_dest_n destinations and backed up, or when obsolete)
    configure archivelog deletion policy to backed up 2 times to sbt

    RMAN Catalog merging - Export/Import RMAN catalog
The source catalog will be deleted and deregistered (unless you use the NO UNREGISTER option); the source catalog schema version must be the same as the RMAN client version;
fails if the same db is already registered in the destination
import catalog rman/rman@reporman dbid=122,123 db_name=pippo,pluto;
    #you are connected to destination, rman@reporman will be imported

    Virtual private catalog or Virtual catalog
    1) SYSDBA> create the db user
    2) SYSDBA> grant recovery_catalog_owner to user
    3) real CATALOG OWNER> grant [REGISTER_DATABASE | REGISTER] to user;
    #permits to register unknown databases, does not grant recovery catalog access
    and / or
4) real CATALOG OWNER> grant catalog for database t1 to user;
    #grant access to a target existing database to a user even if db is not yet registered, use DBID if db not yet registered.
    #User may un/register databases. User creates local scripts and read global scripts
    5) VIRTUAL USER - RMAN> create virtual catalog;
    #use for 11g or later database
    or
    VIRTUAL USER - RMAN> DBMS_RCVCAT.create_virtual_catalog
    #use for 10g database
    6) VIRTUAL USER - RMAN> register database;
RMAN> revoke all privileges from user;
    * Packages *
    DBMS_RCVCAT.create_virtual_catalog
    #must execute before working with any pre-11.1 database. Does not create a virtual private catalog
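
A compact sketch of the grant and registration steps above, run in order (user, password and database names are illustrative):

SQL> create user vpc1 identified by vpc1;
SQL> grant recovery_catalog_owner to vpc1;
#as the real catalog owner
RMAN> connect catalog rcat_owner/pwd@catdb
RMAN> grant register database to vpc1;
RMAN> grant catalog for database t1 to vpc1;
#as the virtual catalog user
RMAN> connect catalog vpc1/vpc1@catdb
RMAN> create virtual catalog;     #for 11g RMAN clients; use DBMS_RCVCAT.create_virtual_catalog for a 10g database
RMAN> register database;          #run while also connected to the unregistered target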

    Data Recovery Advisor



<- RMAN, EM Support Workbench & Database Control, Health Monitor (following errors), VALIDATE or BACKUP command
Quickly detects and analyzes data failures; repair can be manual (mandatory or optional) or automatic, with or without data loss.
Automatically repairs some network I/O errors, control file and datafile inconsistencies, accidental renaming of datafiles. No RAC. No standby database. No ORA-600; the failure must be recorded in the Health Monitor.
Failure attributes: status (open, closed), priority (critical, high, low), grouping.
1) list failure #failures are listed by priority [1:CRITICAL, 2:HIGH, 3:LOW], failures with the same priority are ordered chronologically
2) advise failure #generates manual and automated repair options, RMAN scripts are generated for the automated options
3) repair failure #by default high and critical failures will be repaired and closed. Generates a warning if new failures have been detected since the last advise. May not automatically close a fixed failure
* Views *
V$IR_FAILURE or list failure; change failure 7 closed; change failure priority low; #failures are listed by priority [1:CRITICAL, 2:HIGH, 3:LOW], failures with the same priority are ordered chronologically
V$IR_MANUAL_CHECKLIST or advise failure; #summary and close fixed
V$IR_REPAIR or repair failure; #requires advise failure
validate database; #proactive check; checks only for intrablock (not interblock) corruption in all datafiles, control files and the server parameter file
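
A minimal Data Recovery Advisor session from the RMAN prompt, combining the three steps above (the failure id is illustrative; PREVIEW is shown as a typical intermediate step):

RMAN> list failure;
RMAN> list failure 101 detail;     #101 is an illustrative failure id
RMAN> advise failure;
RMAN> repair failure preview;      #show the generated repair script without running it
RMAN> repair failure;              #run the automated repair and close fixed failures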

Health Monitor

-> runs Data Recovery Advisor checks
View Health Monitor checker reports using: EM, ADRCI, DBMS_HM, V$HM_RUN
Health check mode can be: [Reactive]: health checks run automatically when a critical error occurs, [Manual]
Available checks:
[Data Block Integrity]: checks disk image block corruptions, checksum failures and intrablock corruptions. Does not check for interblock corruption
[Db Structure Integrity]: verifies accessibility, consistency and corruption of all database files
[Undo Segment Integrity]: checks for logical undo corruptions and attempts to repair them
[Redo Integrity check], [Dictionary Integrity check]
* Packages *
DBMS_HM.run_check(check_name, run_name, input_params)
DBMS_HM.get_run_report(run_name M, type O, level O)
* Views *
V$HM_CHECK (all possible checks), V$HM_RUN (view HM checker reports), V$HM_CHECK_PARAM, V$HM_FINDING, V$HM_RECOMMENDATION
show hm_run; #show all checks
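
A small sketch of running a manual health check and reading its report with DBMS_HM (the run name is illustrative; 'Dictionary Integrity Check' is one of the check names listed in V$HM_CHECK):

begin
  DBMS_HM.run_check(check_name => 'Dictionary Integrity Check',
                    run_name   => 'dict_run_01');
end;
/
set long 100000
select DBMS_HM.get_run_report('dict_run_01') from dual;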

Partitioning

Interval partitioning
The high value of the range partitions is called the Transition Point. Extends range partitioning by automatically creating new partitions when inserted data falls out of bounds. Only a single DATE or NUMBER column. Works on index-organized tables. You can't create domain indexes.
create table pippo(time_id date) partition by range(time_id) interval(numtoyminterval(1, 'month'))
(partition p0 values less than (to_date('3/12/1976', 'dd/mm/yyyy')),
 partition p1 values less than (to_date('3/01/1977', 'dd/mm/yyyy')));
alter table pippo merge partitions for (to_date('3/12/1976', 'dd/mm/yyyy')), for (to_date('3/01/1977', 'dd/mm/yyyy'));
#the partitions to merge must be adjacent
* Views *
DBA_TAB_PARTITIONS(partitioning_type[range, interval(new automatically created)])
System partitioning
The application must explicitly name the partition in insert/merge statements; this is not required for update/delete. Can't create as CTAS. Hurts performance. No partitioning keys. Partitions can have different physical attributes. Can't create unique local indexes. Can't split partitions.
create table pippo(c1 number) partition by system (partition p1, partition p2);
insert into pippo partition(p1) values(1);
alter table pippo merge partitions p1, p2 into partition p1;
Virtual Column-Based partitioning
A virtual column is computed on the fly. The datatype is optional. Can collect optimizer statistics. You can index a virtual column. Can use a virtual column in all DML/DDL. Can partition over a virtual column. Can create virtual columns only on normal heap tables. Can't reference another virtual column. Output must always be scalar: no lob, long, raw, ... Can't use a PL/SQL function if the column is used for partitioning.
create table pippo (c1 number, c2 number generated always as (c1*2));
Reference partitioning
The master table is a partitioned table. Can't use interval partitioning. Requires a foreign key relationship. If no tablespace is specified, partitions are placed in the same tablespace as the parent table's partition. Can't specify partition bounds. Partition names must be unique between master and reference table. Can't disable the foreign key. Can't add or drop partitions from the child table, while it is possible for the parent table. Pruning and partition-wise joins work even if query predicates differ from the partitioning key (see the sketch below).
Composite Partitioning
Top level partitioning: Range, List, Hash
Range-list, range-hash, list-list, list-hash, list-range, range-range, interval-range, interval-list, interval-hash
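
A minimal reference-partitioning sketch for the notes above (table, constraint and partition names are illustrative); the child table inherits the parent's range partitioning through the enabled NOT NULL foreign key:

create table orders (order_id number primary key, order_date date)
  partition by range(order_date)
  (partition p2007 values less than (to_date('01/01/2008','dd/mm/yyyy')),
   partition p2008 values less than (to_date('01/01/2009','dd/mm/yyyy')));
create table order_items (
  order_id number not null,
  item_no  number,
  constraint fk_items_orders foreign key (order_id) references orders(order_id))
partition by reference (fk_items_orders);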

ASM Automatic Storage Management
    Disk Group, Striping used 128Kb, automatic rebalance + Failure group, only for Normal o High Redundancy disk group + Disk + ASM File, max size 140 petabyes(11g), 35 terabytes(10g) + File Extent(1,4,16,32,64Mb), resides on single disk, a small extent extend to next size before creating the next extent reducing shared pool + AU Allocation Unit #basic allocation unit #[compatible.rdbms=11.1 size may be (1,2,4,8,16,32,64Mb)] #[compatible.rdbms=10.2 size may be (1,2,4,8Mb)] Db will automatically defragment on problems allocating/extending extents. Manual disk group rebalance avoid external fragmentation Fast Mirror Resync Requires compatible.rdbms=11.1 DISK_REPAIR_TIME(def. 3.6h) enable Fast Mirror Resync, if the disk is offline is not dropped until value specified. Default unit is hours. alter diskgroup dgroupA set attribute 'disk_reair_time'="2D6H30M" #change is effective only for online disks of the group If no disk content is damaged or modified, temporary failure, resynchronize only changed extents 10g: 1) Take disk offline: alter disk group dgroup1 disk data_00001 drop after 0 h; # override disk_repair_time 2) Wipe out the disk headers: dd if=/dev/zero of=asm_disk1 bs=1024 count=100 3) Add disk back to group: alter diskgroup dgroup1 add disk '/dev/raw/raw1' size 100M; 11g: alter diskgroup dA online disks all; #takes online all disks for given diskgroup alter diskgroup dgroupA online; #Disk opened write only, stale extents only are copied, disk read/write alter diskgroup dA offline disks in failuregroup f1 DROP AFTER 4h; # override disk_repair_time. Keep disk offline not dropping for until the time specified. Remember that a disk group may have more failuregroups alter diskgroup dA online disks in failuregroup f1 POWER 2 WAIT; #wait 2 hours before bring dA online again. Remember that a disk group may have more failuregroups alter diskgroup dA drop disks in failuregroup f1 FORCE; #use to drop a disk group that you are unable to repair. Remember that a disk group may have more failuregroups* Views * V$ASM_DISK(preferred_read[y]), V$ASM_DISK_IOSTAT, V$ASM_OPERATION(operation=sync), V$ASM_DISKGROUP(name, allocation, unit_size, compatibility, database_compatibility), V$PWFILE_USERS(sysasm) V$ASM_ATTRIBUTE #requires compatible.asm=11.1* Mirroring * 2 way, 3 way , external redundancy Normal redundancy: 2 failure groups, 2 way mirroring, all local disk belong to same failure group, only 1 preferred failure group for group High redundancy: 3 failure groups, 3 way mirroring, maximum of 2 failure groups for site with local disks,up to 2 preferred failure group for group External redundancy: No failure groups * ASM Preferred Mirror Read * Requires compatible.rdbms=11.1 Once you configure a preferred mirror read (see asm_preferred_read_failure_groups), every node can read from its local disks, only local create diskgroup dg6 external redundancy disk '/dev/raw/raw1' attribute 'au_size'='8M'; attribute 'compatible.asm'='11.1';* OS User * SYSASM instead of SYSDBA, member of OSASM group grant sysasm to aldo; Variable Size Extents The extent size is automatically increased based on file size. Extent size can vary in a single file or across files. No manual configuration required. Performances increased when opening files. Less memory to manage the extent map. 
Fewer extent pointers required Compatibility Params Compatibility can only be advanced ASM 11g supports both 11g and 10g, compatible.asm and compatible.rdbms must be manually advanced since default values are 10.1 * Attributes * compatible.rdbms #Default 10.1. The minimum db version to mount a disk group, once increased cannot be lowered. Must be advanced after advancing compatible.asm #11.1 enable ASM Preferred Mirror Read, Fast Mirror Resync, Variable Size Extents, different Allocation Unit sizes(see AU Allocation Uint) compatible.asm #Default 10.1. Control ASM data structure, cannot be lower than compatible.rdbms. #Must be advanced before advancing compatible.rdbmstemplate.redundancy: unprotect, mirror, hightemplate.tname.striping: coarse, fine* Check command * Verify ASM disk group metadata directories, cross check files extent maps and allocation tables, check link between metadata directory and file directory, check the link of the alias directory, check for unreachable blocks on metadata directories, repair[def.]/norepair, disk consistency if verified 10g: check all, file, disk, disks in failgroup; -> 11g: check; Mount alter diskgroup t dismount; alter diskgroup t mount RESTRICT; or startup retrict; #clients wont be able to access disk group, if you add a disk a rebalance is performed alter diskgroup t dismount; alter diskgroup t mount [NOFORCE(def.) | FORCE]; #NOFORCE wont mount an incomplete disk group. #FORCE you must restore missing disk before disk_repair_time, FORCE requires at least one disk offline, FORCE fails if all disk are online drop diskgroup g1 force include contents; #Command fail if disk in use, must specify with ASMCMD cp +DATA/.../TBSFV.223.333 +DATA/.../pippo.bak #copy a file locally cp +DATA/.../TBSFV.223.333 /home/.../pippo.bak #copy a file to the OS and viceversa cp +DATA/.../TBSFV.223.333 +DATA/.../pippo.bak \sys@mydb . +ASM2 : +D2/jj/.../pippo.dbf #Copy to a remote ASM instance lsdsk <-d><-i><-[l]k><-[l]s><-p>; #list visible disk. In connected mode(default) reads V$... and GV$..., in non-connected scans disk headers after a warning message. <-I> force non-connected mode <-k> detailed infos <-s> shows I/O stats <-p> status <-t> repair related infos <-d> limits to disk group read from headers remap dg5 d1 5000-7500; #remap a range of unreadable bad disk sectors with correct content. Repair blocks that have I/O errors. EM may also be used md_backup [-b backup_file(def. ambr_backup_intermediate_file)] [-g 'diskgroup_name,diskgroup_name,...']; #backup into a text file metadata infos mkdir +DGROUP1/abc mkalias TBSF.23.1222 +DGROUP1/abc/users.dbfMD_RESTORE command recreate diskgroups and restore its metadata only from the previously backed up file. Cannot recover corrupted data md_restore [-b backup_file(def. ambr_backup_intermediate_file)] <-t[FULL(create diskgroups and restore its metadata), NODG(restore metadata for an existing diskgroup), NEWDG(new diskgroup and restore metadata)]>; <-f> write commands to a file <-g> select diskgroups, all if undefined <-o> rename diskgroup <-i> ignore errors md_restore -t newdg -o 'DGNAME=dg3:dg4' -b your_file #restore dg3 giving a different name dg4Directory structure



    $ORACLE_BASE (recommended only) $ORACLE_HOME (do not explicitly set) dbs oranfstab sfirst default location Flash recovery area (different disk then datafiles) Datafiles (different disk then Flash Recovery Area) Inventory /diag Advanced Diagnostic Repository (ADR base=diagnostic_dest multiple ADR homes allowed but only one ADR base) Db name Instance name (use v$DIAG_INFO to view these paths) ADR_HOME (multiple ADR homes allowed but only one ADR base) alert (xml alert format) hm (checker reports) incident trace(V$DIAG.name='Diag Trace') text alert.log background process trace files foreground process trace files wallet ewallet.p12 cdump bdump udump ...init.ora new params

    # A S Masm_preferred_read_failure_groups=data.locationA, data.locationB #Dynamic. disk_group_name.failure_group only disks local to the instance!!asm_diskgroupsasm_diskstring# M E M O R Ysga_target=0 #Static. Enable Automatic Memory Management pga_aggregate_target=0 #Static. Enable Automatic Memory Management memory_target=900M #dynamic default 0 and when upgrading. Total memory for SGA and PGA. With sga_target=0 and pga_target=0 then defaults 60% to SGA and 40% to PGA #With sga_target!=0 and pga_target!=0 they indicate minimum valuesmemory_max_target=2000M; #if unset defaults to memory_target(if greater than 0) or 0(if memory_target is not set)# R E S U L T C A C H Eresult_cache_max_size #Static. Set the high limit. 0 disable. Maximum is 0.25% of memory_target, 0.5% of sga_target, 1%><75% of shared pool.result_cache_max_result #Def. 5, max 100. Max % of result cache for a single cached resultresult_cache_remote_expiration #Def. 0. Time for a remote cached result to be valid. Positive values may lead to uncorrect resultsresult_cache_mode MANUAL(def.), FORCE #Dynamic. MANUAL works only with /*+ result_cache*/ hint on SQL. Same behaviour for client cache. #FORCE cache all unless /*+ no_result_cache */client_result_cache_size #Def. 0 disabled. Static. Size of client cache in bytes. Overriden by oci_result_cache_max_sizeclient_result_cache_lag #Set a low value for unfrequent accesses # client sqlnet.ora only #oci_result_cache_max_size #Overrides client_result_cache_size #oci_result_cache_max_rset_size #set max size in bytes of a single SQL result for a process #oci_result_cache_max_rset_rows #set max size in rows of a single SQL result for a process# P L / S Q Lplsql_optimize_level=3 #default is 2 recommended 3. 2 is the minimum level to enable native compilationplsql_code_type=INTERPRETED(def.), NATIVE #dynamic. INTERPRETED: Pl/sql is precompiled to Pl/sql bytecode using C compiler. #NATIVE db compile Pl/sql to machine code, no need of interpreter, DLL are stored and loaded in db catalog, no filesystem involved# S E C U R I T Ysec_case_sensitive_logon=TRUE(def.), FALSE - case sensitive passwordssec_max_failed_login_attempts=5 #after 5 attempts the connection will be droppedsec_protocol_error_further_action=[CONTINUE | DROP | DELAY], 5 #CONTINUE allow the connection to continue DROP terminate the client connection after 5 attempts #DELAY delay client connection for 5 secondssec_protocol_error_trace_action=none, trace, log, alert #TRACE log to user trace, LOG log to alert.logaudit_trail=default is DB #Audit is enabled by default and stored in AUD$ldap_directory_sysauth= #YES: if a password file is configured then it will be checked before other strong authentications methods. # O P T I M I Z E Roptimizer_capture_sql_plan_baselines=FALSE(def.), TRUE #db create and maintains a plan history for repeatable SQL, dynamicoptimizer_use_sql_baselines=FALSE(def.) #TRUE plan history is automatically captured and mantained for repeated SQL statements.optimizer_use_private_statisticsoptimiz er_use_pending_statistics=FALSE(def.) #Dynamic. Alter session to test pending statistics.optimizer_use_invisible_indexes=FALSE(d ef.) #dynamic. TRUE all invisible indexes are treated as visibleoptimizer_features_enable #may force the optimizer to work as in 10g#control_management_pack_access=NONE, DIAGNOSTIC, DIAGNOSTIC+TUNING(def.)java_jit_enabled=true #default is true. 
Java code will be compiled natively with no C compilercontrol_management_pack_access=DIAGNOSTIC+ TUNING(def.), NONE(disable ADDM)memory_targetmemory_max_targetdiagnostic_dest = #ADR base (optional). Default diagnostic_dest=%ORACLE_BASE if %ORACLE_BASE defined else $ORACLE_HOME/log, many homes allowedddl_lock_timeout=0(def.) #dynamic. length of time for a ddl waiting a busy resource, max 1000000(11.5 dd)db_securefile=ALWAYS, FORCE, PERMITTED(def.), NEVER, IGNORE #dynamic. #ALWAYS: if tbs is enabled for ASSM Automatic Segment Space Management creates all lob as SecureFile otherwise creates BasicFiles. #FORCE create all lob as SecureFiles regardless of the tablespace ASSM. #PERMITTED SecureFiles creation is allowed. #NEVER SecureFile creation is never allowed. #IGNORE SecureFile creation is not allowed and ignores errors creating BasicFiles with SecureFile optionsdb_ultra_safe=OFF(1:OFF 2:TYPICAL 3:TYPICAL), DATA_ONLY(1:MEDIUM 2:FULL 3:TYPPICAL), DATA_AND_INDEX(1:FULL 2:FULL 3:TYPICAL) 1 db_block_checking=TYPICAL, OFF or FALSE, MEDIUM, FULL or TRUE 2 db_block_checksum=FALSE(def.), FULL(recomm.) 3 db_lost_write_protect=TYPICALstatistic_level=[TYPICAL | ALL | BASIC] #BASIC will disable AWR providing SQL sources to Automatic SQL Tuning Advisor and ADDMfilesystemio_options=[ASYNCH | SETALL] #ASYNCH or SETALL allows I/O calibrationtimed_statistics=true #TRUE allows I/O calibrationsqlnet.ora new param encryption_wallet_location=(source=(method=file)(m ethod_data=(directory=/yourdir)))Database Replay

    Useful for debugging, upgrading the OS, converting from a single instance to RAC Captures user switching sessions, logins, all SQL(text, context and frequency), DML, DDL, session and system control calls(alter session..., alter system...) NOT captures flashback SQL, direct path loads, distributed transactions, background activities, scheduled jobs Steps 1) With SYSDBA or SYSOPER required, you should(but not required) restart the db in RESTRICTED mode before capture 2) Create a directory to store the capture files and ensure space is enough 3) capture and optionally define filters 4) This step is optional but recommended. You should recreate a test environment identical to what the capture was at the beginning and reset the system clock. 5) Move captured file to test environment 6) Preprocess captued workload, this only happens once and simply converts captured data into files that can be replayed 7) Remap connection strings and modify database links 8) Set up one or more clients, each client can control one or more workload sessions Upgrading 1)run @/app/oracle/product/11.1.0/db_1/rdbms/admin/utlu111i.sql(Pre-Upgrade Information Tool) from 10g database 1.5) startup upgrade# from 11g 2)@?/rdbms/admin/catupgrd.sql 3)@?/rdbms/admin/utlu111s.sql(post upgrade status) 4)@?/rdbms/admin/catuppst.sql(post upgrade actions) 5)@?/rdbms/admin/utlrp.sql[no params | 0 | 1 | n] #No params: parallel recompilation based on CPU_COUNT. 1: serial. n: parallel n 6)Enhance compatibility param * Avoid plan regression * Before upgrade capture plans for SQL Workload into an Sql Tuning Set(STS) After upgrade manually load the old plansADR Automatic Diagnostic Repository
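
A hedged sketch of the Database Replay steps above using the DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages (directory, names and credentials are illustrative):

SQL> create directory capdir as '/u01/capture';
SQL> exec DBMS_WORKLOAD_CAPTURE.start_capture(name => 'cap1', dir => 'CAPDIR');
#...run the production workload, then stop the capture...
SQL> exec DBMS_WORKLOAD_CAPTURE.finish_capture;
#on the test system, after copying the capture files
SQL> exec DBMS_WORKLOAD_REPLAY.process_capture(capture_dir => 'CAPDIR');
SQL> exec DBMS_WORKLOAD_REPLAY.initialize_replay(replay_name => 'rep1', replay_dir => 'CAPDIR');
SQL> exec DBMS_WORKLOAD_REPLAY.prepare_replay(synchronization => TRUE);
$ wrc system/pwd mode=replay replaydir=/u01/capture    #start one or more replay clients
SQL> exec DBMS_WORKLOAD_REPLAY.start_replay;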

    diagnostic_dest=ADR base (optional) default diagnostic_dest=%ORACLE_BASE if %ORACLE_BASE defined else $ORACLE_HOME/log, many homes allowed max 5 diagnostic data per hour per problem ADR Command Interpreter Many homes allowed concurrently, use SET HOMEPATH. Not all commands works with multiple homes, homepath is null by default, all homes are current by default view XML alert, show hm_run, create report hm_run , show report hm_run * Views * V$DIAG_INFO show all locations ADRCI - ADR Command Interface Used to view diagnostic data, view content of alert log file, package incidents and problem informations Incident Packaging Service <- EM Support Workbench(Support Workbench->Incident Package Configuration)(Collect diagnostic data, Generates incident reports and transmits), ADRCI Critical Error=Problem=ORA-04031, composed of many Incidents=ORA-04031 Incident(Collecting, Ready, Tracking, Closed, Data Purged) Retention: 1 year metadata(retention may be changed with EM), 1 month dumps. Incident is closed after twice retention elapses, is purged 30 days after open 1 Generate logical pack, 2 finalize, 3 gen. physical Incident thresholds cannot be modified. No edit and customizing when Quick Packaging Oracle Support Workbench may be enabled to automatically upload files to Oracle Support by providing infos during the installation of Oracle Configuration Manager Flood-controlled incidents are incidents reported multiple times. Alert log is updated and incident recorded in ADR but no dumps are generated.SQL Performance Analyzer
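
A short adrci session illustrating the commands mentioned above (the home path, incident id and package number are illustrative):

$ adrci
adrci> show homes
adrci> set homepath diag/rdbms/orcl/orcl
adrci> show alert -tail 50
adrci> show incident
adrci> ips create package incident 6745
adrci> ips generate package 1 in /tmp
#the first ips command builds a logical package for incident 6745; the second writes the physical zip to /tmp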

    1)First you capture SQL workload in a STS(SQL Tuning Set) 2)Move SQL workload to test environment 3)Execute the workload to capture baseline execution data 4)you make your changes 5)Execute the workload to capture execution data 6)Analyze differences in SQL performances 7)Tune regressed SQL statements Identify plan that might have regressed after a database upgrade, hardware reconfiguration, schema changes and any change may affect SQL. net SQL workload is identified. ?SQL are executed? Cannot use SQL Performance Analizer to identify sql statements that are part of the SQL plan baseline You can set a time limit within which all SQL statement in the workload must execute DML and DDL are NOT taken in consideration * Parameters * TIME_LIMIT parameter for all SQL LOCAL_TIME_LIMIT for single SQL EXECUTION_TYPE[EXPLAIN PLAN | TEST EXECUTE(def.)] #TEST EXECUTE: Each SQL Statement in the workload is executed one time and execution plans and statistics are collected.SQL Repair Advisor
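
A hedged DBMS_SQLPA sketch of the workflow above (task, STS and execution names are illustrative):

var t varchar2(64)
exec :t := DBMS_SQLPA.create_analysis_task(sqlset_name => 'my_sts', task_name => 'spa_task');
exec DBMS_SQLPA.execute_analysis_task(task_name => 'spa_task', execution_type => 'TEST EXECUTE', execution_name => 'before_change');
#...make the change (upgrade, parameter, schema)...
exec DBMS_SQLPA.execute_analysis_task(task_name => 'spa_task', execution_type => 'TEST EXECUTE', execution_name => 'after_change');
exec DBMS_SQLPA.execute_analysis_task(task_name => 'spa_task', execution_type => 'COMPARE PERFORMANCE', execution_name => 'diff');
select DBMS_SQLPA.report_analysis_task('spa_task', 'TEXT', 'TYPICAL', 'SUMMARY') from dual;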

    <- Support Workbench(Diagnostic Summary->Active Incidents link) (often provides patch to fix) sqlplan will change after patching. SQL Repair Advisor is not automatically launched after a SQL statement crash * Export patches * DBMS_SQLDIAG.pack_stgtab_sqlpatch(staging_table), export, unpack_stgtab_sqlpatch(staging_table) * Packages * DBMS_SQLDIAG.create_diagnosis_task(sql_text, task_name) DBMS_SQLDIAG.set_tuning_task_parameter(task_id, filter) DBMS_SQLDIAG.execute_diagnosis_task(task_name) DBMS_SQLDIAG.report_diagnosis_task(task_name, out_type) DBMS_SQLDIAG.accept_sql_patch(task_name, task_owner) * Views * DBA_SQL_PATCHES(recommended patches), drop_sql_patch(procedure)SQL Test Case Builder

    <- EM Support Workbench Theese informations are gathered for a problematic SQL statement: the text, involved table definitions, optimizer statistics. No dump files DBMS_SQLDIAG.export_sql_testcase_dir_by_incAutomatic Workload Repository

    <- EM(Adaptive Metric Threshold link -> Baseline Metric Thresholds) Requires AWR to have at least the same retention time. A baseline is any set ot snapshot taken over a period of time. Static baseline Repeating static baseline #Will be automatically created Moving window baseline #Will not be automatically created. SYSTEM_MOVING_WINDOW is system provided Retention raise from 7 days(10g) to 8 days(11g) for default baseline SYSTEM_MOVING_WINDOW containing Adaptive Threshold Thresholds types: Significance Level #if set to 0.99 alert on 1% change. Based on statistical relevance. Only Basic Metric support Significance level thresholds Percentage of maximum #calculated using the highest value captured by the baseline Fixed values When a baseline ends in the future a new baseline with the same name is created. * Package * DBMS_WORKLOAD.modify_baseline_window_size(wind_siz e) #to change def. retention, if increase then increase AWR retention to match time DBMS_WORKLOAD.create_baseline #should be used for past time periods DBMS_WORKLOAD.create_baseline_template single: (start_time, end_time, bas_name, templ_name, EXPIRATION(O)=>null, dbid(O)) #EXPIRATION if not specified or null than the baseline template created will never expire repeatingday_of_week, hour_in_day, duration, expiration(O)=>null, start_time, end_time, baseline_name_prefix, template_name, dbid(O)) #should be used when part or all of the time period is on the future DBMS_WORKLOAD.rename_baseline(old, new, dbid(O)) DBMS_WORKLOAD.modify_baseline_window_size DBMS_WORKLOAD.drop_baseline_template(templ_name, dbid(O)) DBMS_WORKLOAD.select_baseline_metrics(baseline_nam e)(display metric thresholds) DBMS_WORKLOAD_REPOSITORY... #resize AWR retention period * View * DBA_HIST_BASELINE_TEMPLATE (template_type[single | repeated], repeat_interval[timing string as for DBMS_SCHEDULER]) DBA_HIST_BASELINE_DETAILS (shutdown[yes | no | null], pct_total_time[total snapshot time / total possible time], error_count[errors during snapshot]) DBA_HIST_BASELINE (baseline_type[static | moving | window | generated], creation_type, expiration[how long keep the baseline], template_name, last_computed, moving_window_size[if baseline_type is null then AWR retention period else number of days of moving window])Controlling Automated Maintenace Tasks
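
A small sketch of the baseline calls above; the full package name is DBMS_WORKLOAD_REPOSITORY (snapshot ids, names and sizes are illustrative):

exec DBMS_WORKLOAD_REPOSITORY.create_baseline(start_snap_id => 100, end_snap_id => 110, baseline_name => 'peak_jan');
#the moving window cannot exceed the AWR retention, so raise the retention (in minutes) first if needed
exec DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(retention => 43200);
exec DBMS_WORKLOAD_REPOSITORY.modify_baseline_window_size(window_size => 30);
#window_size is in days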

    New ABP(Autotask Background Process) - converts automatic tasks into Scheduler jobs - determine the jobs that need to be created for each maintenance task - stores task execution history in the SYSAUX tablespace - does NOT execute maintenance tasks A task may be enabled/disabled and changed of resource percentage in one or all maintenance windows. May change duration of maintenance windows. Database Resource Manager is automatically enabled during maintenance window By default AutoTask schedules: optimizer statistics collection, SQL Tuning Advisor and Automated Segment Advisor. * MAINTENANCE_WINDOW_GROUP * DEFAULT_MAINTENANCE_PLAN(belongs to a resource allocation) 10g: weeknight_window, weekend_window 11g: [10PM-02AM, 4h]: monday_window, tuesday_window, wednesday_window, thursday_window, friday_window. [06AM-02AM, 20h]saturday_window, sunday_window * Managing Automatic Maintenance Tasks * <- EM 10g: DBMS_SCHEDULER only enable or disable automatic maintenance tasks 11g: DBMS_AUTO_TASK_ADMIN more fine grained control DBMS_AUTO_TASK_ADMIN.enable(client_name, operation, window_name) DBMS_AUTO_TASK_ADMIN.disable(client_name, operation, window_name) DBA_AUTOTASK_OPERATION lists only automatic task(clients) operations names DBA_AUTOTASK_CLIENT DBA_AUTOTASK_TASK #view ABP(Autotask Background Process) repository, stores execution history for automated maintenance tasks I/O Calibration <- EM(Database Control -> Performance -> I/O Calibration) Requires 10 minutes to run, sysdba, filesystemio_options=[ASYNCH | SETALL], timed_statistics=true recommended during low activity, no concurrent calibrations allowed DBMS_RESOURCE_MANAGER.calibrate_io(num_disks=>1, max_latency=>10, max_iops, max_mbps, actual_latency) V$IO_CALIBRATION_STATUS, DBA_RSC_IO_CALIBRATE, V$IOSTAT_FILE (all show calibration results)Database Resource Manager
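
A minimal I/O calibration block for the notes above (disk count and latency are illustrative; the last three parameters are OUT values that receive the measured results):

declare
  l_max_iops pls_integer;
  l_max_mbps pls_integer;
  l_latency  pls_integer;
begin
  DBMS_RESOURCE_MANAGER.calibrate_io(num_physical_disks => 1,
                                     max_latency        => 10,
                                     max_iops           => l_max_iops,
                                     max_mbps           => l_max_mbps,
                                     actual_latency     => l_latency);
  dbms_output.put_line(l_max_iops || ' iops, ' || l_max_mbps || ' MB/s, ' || l_latency || ' ms');
end;
/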

    Automatic actions based on current session single call usage: switch of Resource Consumer Group(switch grant needed), session kill, statement kill Database Resource Manager is automatically enabled during maintenance window CREATE_RESOURCE_PLAN(switch_group[null | CANCEL_SQL | KILL_SESSION | a group], switch_time[UNLIMITED], switch_estimate[false | true], switch_io_megabytes[null | mb], witch_io_reqs[null | io#], switch_for_call, switch_for_call[null | true]) CREATE_PLAN_DIRECTIVE(plan, group_or_subplan, mgmt_p1, switch_group[null | CANCEL_SQL | KILL_SESSION | a group], switch_io_reqs, switch_io_megabytes, switch_for_call, switch_time) MIXED_WORKLOAD_PLAN predefined resource plan Level 1: 100% sys_group Level 2: [85% interactive_group], [15% ORA$AUTOTASK_SUB_PLAN, ORA$DIAGNOSTIC, OTHER_GROUPS] Level 3: batch_groupOracle Scheduler

    Minimum amount of metadata required for creation, recommended to create a big number of job Create Lightweight jobs using: job array, named programs, schedules, DBMS_SCHEDULER.create_job Lightweight jobs are created only from job templates(stored proc or Scheduler program), privileges are only inherited from parent job, may discriminate DataGuard nodes * Package * DBMS_SCHEDULER.create_program(program_name, program_action, program_type[plsql_block | stored_procedure], enabled) DBMS_SCHEDULER.create_job(job_name, program_name, repeat_interval(opt.), end_time(opt.), schedule_name(opt.), job_style[LIGHTWEIGHT], comments) * Create a Job Array and submit using single transaction * declare testjob sys.job; testjobarr sys.job_array; begin testjobarr:=sys.job_array(); testjobarr.extend(500); testjob:=sys.job(...); testjobarr(i):=testjob; dbms_scheduler.create_jobs(testjobarr, 'transactional'); end; * Remote External Jobs * On source db: XML DB(default), run as SYS prvtrsch.plb, DBMS_SCHEDULER.set_agent_registration_pass(registr ation_password, expiration_date(O), max_uses(O)) On Dest host: install Scheduler agent(not default), configure schagent.conf, schagent.exe -registerdatabase hostdb 1521, schagent.exe -start * Enable a remote job * DBMS_SCHEDULER.create_credential(cred_name, username, pwd) DBMS_SCHEDULER.create_job(...) DBMS_SCHEDULER.set_attribute(job_name, 'credential_name', cred_name) DBMS_SCHEDULER.set_attribute(job_name, 'destination', hostort), .enable(job_name)Security Enhancements

    sec* parameters Pwd are case sensitive(default when NOT upgrading, reset the pwd otherwise), delay after pwd wrong up to 10 secs orapwd file=... entries=... ignorecase=n(def.) alter profile ... password_verify_function verify_function_11g utlpwdmg.sql enforce pwd verification OS authentication is checked before the password file Authentication using OID can be configured by granting SYSOPER/SYSDBA enterprise role in OID Administrators can be authenticated using local OS if they are in the OSDBA or OSPER group * ACL Fine Grained Access to network system supplied packages* An ACL is used to restrict access to certain hosts is stored in XML format. Connect comprise resove. an ACL Access Control List is a users/privilege list, only one ACL per host DBMS_NETWORK_ACL_ADMIN.create_acl(acl[name of XML DB priv file], description, principal, is_grant, privilege[connect | resolve]) #one privilege must be specified on creation DBMS_NETWORK_ACL_ADMIN.add_privilege(acl[name of XML DB priv file], principal, is_grant, privilege[connect | resolve]) DBMS_NETWORK_ACL_ADMIN.assign_acl(acl[name of XML DB priv file], host[ex. *.fadalti.com], lower_port, upper_port) DBMS_NETWORK_ACL_ADMIN.check_privilege(acl[name of XML DB priv file], username, privilege[connect | resolve]) Oracle Wallet <- Oracle Wallet Manager(owm.exe) see sqlnet.ora for location. Encryption Wallet(manually open after db startup) and Auto-Open Wallet Wallet required, configured and opened to encrypt tablespaces, [AES128, 3DES168, AES128, AES192, AES256], I/O overhead Attempting to create a table in an ecrypted tablespace with the wallet closed raise an error. You cannot change the key of an encrypted tablespace. A normal tablespace cannot be converted to an encrypted tablespace. Creation method 1: mkstore -wrl $ORACLE_BASE/admin/$ORACLE_SID/wallet -create Creation method 2: alter system set encryption key identified by "password" create tablespace pippo datafile 'c:\pippo.dbf' size 100m encryption using 'AES128' default storage(encrypt)Automatic SQL Tuning Advisor
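
A compact sketch combining the ACL calls above (the XML file name and user are illustrative; *.fadalti.com follows the example in the notes):

begin
  DBMS_NETWORK_ACL_ADMIN.create_acl(acl => 'mailserver.xml', description => 'mail server access',
                                    principal => 'HR', is_grant => true, privilege => 'connect');
  DBMS_NETWORK_ACL_ADMIN.add_privilege(acl => 'mailserver.xml', principal => 'HR',
                                       is_grant => true, privilege => 'resolve');
  DBMS_NETWORK_ACL_ADMIN.assign_acl(acl => 'mailserver.xml', host => '*.fadalti.com',
                                    lower_port => 25, upper_port => 25);
  commit;
end;
/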

    <- EM statistic_level=TYPICAL, BASIC will disable AWR providing SQL sources. AWR retention less than 7dd disable ASTA. STA may evolves SQL Plans from non-accepted to accepted 10g Automatic Tuning Optimizer(never auto tune) ->upgraded-> 11g Automatic SQL Tuning Advisor job name SYS_AUTO_SQL_TUNING_TASK runs for 1 hour, runs only once during nightly maintenace window as part of MAINTENANCE_WINDOW_GROUP Statistical analysis, SQL profiling, Access path analysis(add indexes), SQL structure analysis(Modifying SQL statements) STA test and automaticall implement only SQL profiles creations with at least 66% of benefit, choses statements from AWR Top SQL with poor performance in the past week or you manually run providing Sql Tuning Sets. If run in Tuning Mode then the optimizer might run statistics and store in SQL Profiles * Limitations * No recursive SQL, no ad hoc SQL, no parallel SQL, no long run profiled SQL, no insert/delete, no ddl * Report * Show if statisticd need to be collected, profile was recommended, index recommendation. Show a benefit percentage of implementing recommendation for SQL profile and indexes * Package * DBMS_SQLTUNE.set_tuning_task_parameter(... see params below ...) #for automatic tuning only, no manual task_name=>'SYS_AUTO_SQL_TUNING_PROG' parameter=>accept_sql_profile: When TRUE auto accept SQL profiles. Profiles are always generated parameter=>replace_user_sql_profiles: allowed to replace user created SQL profiles parameter=>max_sql_profiles_per_exec: maximum number of allowed SQL profiles changes accepted per tuning task parameter=>max_auto_sql_profiles: maximum total number of allowed SQL profiles changes by db parameter=>execution_days_to_expire: default 30, task history expiration parameter=>time_limit: to change default duration of 1 hour in seconds parameter=>test_execute: only use plan costs DBMS_SQLTUNE.report_auto_tuning_task(type[text], level[typical], section[all]) return varchar2 DBMS_AUTO_TASK_ADMIN.enable(client_name=>'sql tuning advisor', operation=>'NULL', window_name=>['NULL'(all) | (a name)]) DBMS_AUTO_TASK_ADMIN.disable #disable the Automatic Sql Tuning process * View * DBA_SQL_PROFILES[type], if type=AUTO then auto implemented SQL profile DBA_ADVISOR_EXECUTIONS shows metadata information DBA_ADVISOR_SQLSTATS shows a list of all SQL compilation and execution statistics DBA_ADVISOR_SQLPLANS a list of all SQL execution plansSQL Plan Management

    Preserve SQL performance across major system changes A SQL Plan may be: non accepted #may be evolved to accepted accepted #required for use, all manually loaded enabled #required for use fixed #A fixed plan has precedence over a non-fixed even if with higher cost, is FIXED if at least an enabled plan inside is FIXED=yes, #the optimizer will not add new plans. If you accept a profile recommended by STA that profile will not be FIXED SQL Tuning Advisor may evolve plans from non-accepted to accepted. * Plan history * Contains both accepted and not-accepted plans for a repeatable SQL, best plan not accepted yet in SQL plan baseline * SQL Plan Baseline * Contains plan history but only accepted Automatic plan capture: see optimizer_capture_sql_plan_baselines parameter Manual plan capture: performances are not verified * Package * DBMS_SQLTUNE.create_sqlset(sqlset_name, description) creates an empty SQL set select value(p) from table(DBMS_SQLTUNE.select_workload_repository('pea k baseline', null, null, 'elapsed time', null, null, null, 20)) p DBMS_SQLSET.load_sqlset(sqlset_name, populate_cursor ) DBMS_SPM.load_plans_from_sqlset(sqlset_name, fixed=>no, enabled=>yes) #plans are loaded into the SQL plan baseline as non-fixed accepted and enabled plans. May be used DBMS_SPM.load_plans_from_cursor_cache(sql_id, plan_hash_value[null ], sql_text, fixed[no | yes ], enabled[yes], attribute_name[sql_text | parsing_schema_name | module | action], attribute_value) return integer DBMS_SPM.evolve_sql_plan_baseline(sql_handle, plan_list, plan_name, time_limit[DBMS_SPM.auto_limit | DBMS_SPM.no_limit], verify[YES], commit[YES]) return CLOB #show all non-accepted plans with status changed to accepted. #VERIFY=YES the database executes the unaccepted plans and compare performances againts the SQL plan baseline #VERIFY=NO all unaccepted plans became accepted DBMS_SPM.alter_sql_plan_baseline(sql_handle, plan_name, attribute_name[accepted_status | enabled], attribute_value=>'yes') DBMS_XPLAIN.display_sql_plan_baseline * Views * DBA_SQL_PLAN_BASELINES(sql_handle, sql_text, plan_name, origin[manual-load | manual-sqltune | auto-capture | auto-sqltune], enabled, accepted, fixed, autopurge) SMB SQL Management Base By default uses Automatic Segment Space Management ASSM contains plan history information, the statement log, SQL profiles and plan baselines. Is part of data dictionary and stored in sysaux tbs. sysaux tbs must be online to use SQL Plan Management features that access the SMB DBMS_SPM.configure([SPACE_BUDGET_PERCENT | PLAN_RETENTION_WEEKS], value) #SPACE_BUDGET_PERCENT: if the size of SMB exceed 10% of sysaux tablespace a weekly alert is generated #PLAN_RETENTION_WEEKS: plan not used for more than 1 year are purged * Views * DBA_SQL_MANAGEMENT_CONFIG(SPACE_BUDGET_PERCENT, PLAN_RETENTION_WEEKS)
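
A short sketch of loading a plan from the cursor cache and evolving non-accepted plans (the sql_id and sql_handle are illustrative placeholders):

declare
  n pls_integer;
  r clob;
begin
  n := DBMS_SPM.load_plans_from_cursor_cache(sql_id => 'a1b2c3d4e5f6g');  -- loaded as accepted, enabled, non-fixed
  r := DBMS_SPM.evolve_sql_plan_baseline(sql_handle => 'SYS_SQL_abc123', verify => 'YES', commit => 'YES');
  dbms_output.put_line(r);
end;
/
select sql_handle, plan_name, enabled, accepted, fixed from dba_sql_plan_baselines;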

#5

    Default

    SQL Access Advisor

    May be interrupted and resumed. By default is not scheduled as a maintenance task by the AutoTask. Partitioning Recommendations for tables, indexes and materialized views, default mode is interval. Partitioning Recommendations require at least 10000 rows on table, table must have some predicate or join with date or number , space, no bitmap indexes, no interrupted SQL Access Advisor. Advice on tuning Materialized Views, Materialized View Logs, indexes(B-tree, bitmap, function based) * Packages * DBMS_ADVISOR.add_sts_ref(task_name, sts_owner, sts_name) #add an existing SQL tuning set DBMS_ADVISOR.add_sqlwkld_ref(task_name, sts_name) #add an existing SQL tuning set DBMS_ADVISOR.delete_sts_ref(task_name, sts_owner, sts_name) link the SQL Access Advisor and the Workload DBMS_ADVISOR.create_task(advisor_name=>'SQL Access Advisor', task_name) DBMS_ADVISOR.set_task_parameter(task_name, param, value)[partition | segment][def_partition_tablespace, tbs][max_number_partitions,...] [valid_table_list,...][mode,comprehensive][journaling,4][analysis_scope,all] DBMS_ADVISOR.execute_task(task_name) DBMS_ADVISOR.get_task_report(task_name, type[text], level[typical], section[all], owner_name, execution_name, object_id) DBMS_ADVISOR.cancel_task(task_name) wont see any recommendation DBMS_ADVISOR.interrupt_task(task_name) sometimes see recommendation, can't resume or restart DBMS_ADVISOR.create_file(DBMS_ADVISOR.get_task_scr ipt(task_name), directory_name, file_name) DBMS_ADVISOR.copy_sqlwkld_to_sts() DBMS_ADVISOR.quick_tune(DBMS_ADVISOR.SQLACCESS_ADV ISOR, task_name, sql_text) #task is automatically createdADDM Automatic Database Diagnostic Monitor

    control_management_pack_access=DIAGNOSTIC+TUNING and statistic_level=TYPICAL or ALL Analyzes AWR data, present problems and recommendations. Runs automatically by default when a new AWR snapashot is taken. Enabled by default. You can use directive to limit or filter the ADDM * ADDM for RAC Real Application Cluster * Database ADDM(analyze all RAC instances), Instance ADDM(analyze one RAC instances), Partial ADDM(analyze some RAC instances) Analyze: [Global resources and global I/O], [High-Load SQL], [Contention across the instances], [Global cache interconnect traffic], [network latency issues], [skews in instance report times] * Packages * DBMS_ADDM.analyze_db(task_name, begin_snapshot, end_snapshot) #Creates an ADDM task global for all RAC instances DBMS_ADDM.analyze_inst #Creates an ADDM task for a single instance DBMS_ADDM.analyze_partial #Creates an ADDM task for some instances DBMS_ADDM.delete #delete an ADDM task DBMS_ADDM.get_report(task_name) #Gets a report of an ADDM task DBMS_ADDM.delete_finding_directive; DBMS_ADDM.insert_finding_directive(task=>null, directive=>'SGA Directive', finding=>'Undersized SGA', minactivesessions=>5, minpctimpact=>50) #TASK : null means all subsequently created tasks #FINDING: the finding in DBA_ADVISOR_FINDING_NAMES DBMS_ADDM.delete_sql_directive; DBMS_ADDM.insert_sql_directive #Limit action DBMS_ADDM.delete_segment_directive; #Limit action DBMS_ADDM.insert_segment_directive #Prevent ADDM from reporting a schema, segment, subsegment or object DBMS_ADDM.delete_parameter_directive; DBMS_ADDM.insert_parameter_directive(param_name) #Prevent ADDM from altering specified parameter DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER (advisor_name, parameter, value) #Specify the mode in wich to run ADDM [Database | Instance | Partial] * Views * DBA_ADVISOR_FINDINGS(finding_name) #USER_... available. Shows which findings occur most frequently in the db DBA_ADDM_TASKS #USER_... available. Show all executed ADDM tasks DBA_ADDM_INSTANCES #USER_... available. Instance level info for completed ADDM tasks DBA_ADD_FINDINGS #USER_... available. Extends DBA_ADVISOR_FINDINGS DBA_ADVISOR_FINDING_NAMES #USER_... available. Lists all registered finding names DBA_ADVISOR_RECOMMENDATIONS(filtered[Y | N]) DBA_ADVISOR_ACTIONS(filtered[Y | N])Automatic Memory Management
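
A minimal database-wide ADDM run over an AWR snapshot range (task name and snapshot ids are illustrative; task_name is an IN OUT parameter, so it must be a variable):

declare
  tname varchar2(60) := 'addm_db_100_110';
begin
  DBMS_ADDM.analyze_db(task_name => tname, begin_snapshot => 100, end_snapshot => 110);
end;
/
set long 1000000
select DBMS_ADDM.get_report('addm_db_100_110') from dual;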

sga_target=0 #static; pga_aggregate_target=0 #static; memory_target=900M #dynamic; memory_max_target=2000M;
show parameter target; #archive_lag_target=0, db_flashback_retention_target=1440, fast_start_io_target=0, fast_start_mttr_target=0, memory_max_target=900M, memory_target=900M, pga_aggregate_target=0, sga_target=0

sga_target | pga_aggregate_target | memory_target | memory_max_target | result
0          | 0                    | 900M          | 999M              | AMM enabled, defaults 60% to SGA and 40% to PGA
50M        | 100M                 | 900M          | 999M              | AMM enabled, SGA minimum 50M, PGA minimum 100M
50M        | 0                    | 900M          | 999M              | AMM enabled, SGA minimum 50M, PGA minimum is memory_target-50M
0          | 100M                 | 900M          | 999M              | AMM enabled, SGA minimum is memory_target-100M, PGA minimum 100M

Further combinations: memory_target=900M with memory_max_target 0 or null -> AMM enabled, memory_max_target defaults to memory_target (SGA is not autotuned, PGA is); sga_target=50M with memory_target=0 -> only SGA subcomponents are automatically tuned, PGA is autotuned, memory_max_target=0; memory_target 0 or null with memory_max_target=999M -> AMM disabled. memory_target=0 is the default value.
* Views *
V$MEMORY_DYNAMIC_COMPONENTS(component, current_size, user_specified_size)
V$MEMORY_CURRENT_RESIZE_OPS #current resizes in progress
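
A minimal sketch of switching an instance to Automatic Memory Management (sizes are illustrative; memory_max_target is static, so a restart is required):

alter system set memory_max_target=2000M scope=spfile;
alter system set memory_target=900M scope=spfile;
alter system set sga_target=0 scope=spfile;
alter system set pga_aggregate_target=0 scope=spfile;
shutdown immediate
startup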

Optimizer Statistics Collection
    Automatically during nightly manintenance window, precedence for most needing objects. Can set preferences at table, schema, database and global level. Both global(incrementally but no with histograms) and local statistics for partitioned tables. Statistics job is atomic at schema level Statistics are collected for user-defined function and function-based indexes A pending statistic is not yet available * Options, Statistics preferences * publish[true | false]: def. true. Store statistics in data dictionary, current statistics stale_percent: def. 10%. Threshold level for an object to be stale * Granularity * auto, global, global and partition, all, partition, subpartition * Extended Statistics * multi column or expression. Collects number of distinct values, density, number of nulls, frequency histograms Capture statistics, Test and Publish 1) alter session set optimizer_use_pending_statistics=false; 2) set PUBLISH option to false using SET_TABLE_PREFS procedure #subsequent statistics will be gathered as pending 3) gather the statistics for table; 4) alter session set optimizer_use_pending_statistics=true; 5) test the statistics 6) execute the PUBLISH_PENDING_STATS procedure Capture global statistics for a partitioned table INCREMENTALLY, only on last partition Incremental works if granularity=global and table marked both incremental and publish, estimate_percent=>auto_sample_size[always recommended] 1) set INCREMENTAL value to true use SET_TABLE_PREFS procedure 2) DBMS_STATS.gather_table_stats(ownname=>'ALDO', tabname=>'persone', granularity=>'GLOBAL') * Packages * DBMS_STATS.gather_database_stats_job_proc DBMS_STATS.set_prefs #set preferences at various levels DBMS_STATS.set_table_prefs(schema, table, pref_name, pref_value) #PUBLISH =[true(def.) | false] FALSE: statistics taken will be pending #INCREMENTAL=[true | false(def.)] TRUE: statistics will no be collected for partition that have not been changed DBMS_STATS.set_schema_prefs DBMS_STATS.resume_gather_stats DBMS_STATS.get_prefs(['estimate_percents' | 'stale_percent' | 'publish'], schema, table) DBMS_STATS.publish_pending_stats(schema[null(def.)], table[null(def.)]) #pending stats became public DBMS_STATS.delete_pending_stats(schema[null(def.)], table[null(def.)]) DBMS_STATS.export_pending_stats(schema[null(def.)], table[null(def.)]) DBMS_STATS.create_extended_stats(ownname, tabname, extension=>'(name, surname)') # select DBMS_STATS.create_extended_stats(null, 'persone', extension=>'(name, surname)') from dual; #collect extended statistics. A column group is created. A virtual hidden column is created with a system-defined name DBMS_STATS.drop_extended_stats(ownname, tabname, extension=>'name, surname') DBMS_STATS.show_extended_stats(ownname, tabname, extension=>'name, surname') return varchar2 #return the name of the statistics entrys DBMS_STATS.gather_table_stats(ownname=>null, tabname=>'pippe', method_opt=>'for all columns size skewonly for columns(lower(a),b) skewonly') #coll. ext. stats * Views * DBA_TAB_STATS, DBA_IND_STATS stored published statistics DBA_TAB_STAT_PREFS(owner, table_name, preference_name, preference_value) #shows current settings for statistics preferences DBA_TAB_PENDING_STATS DBA_STAT_EXTENSIONS(extension_name, extension)Result Cache
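
A compact sketch of the capture-test-publish flow above with DBMS_STATS (the ALDO/persone names follow the examples in the notes):

exec DBMS_STATS.set_table_prefs('ALDO', 'PERSONE', 'PUBLISH', 'FALSE');
exec DBMS_STATS.gather_table_stats('ALDO', 'PERSONE');
#statistics above are gathered as pending, not published
alter session set optimizer_use_pending_statistics = true;
#...run the test queries and check the plans...
exec DBMS_STATS.publish_pending_stats('ALDO', 'PERSONE');
#or DBMS_STATS.delete_pending_stats('ALDO', 'PERSONE') to discard them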

    In SGA(shared pool). Affected by memory_target, sga_target, shared_pool_size. Recently-used algorithm to age out results. Can cache flashback SQL, can cache query fragments. Used both for SQL queries and deterministic functions. /*Hint*/: result_cache and no_result_cache can use in a inline view Result Cache Memory pool SQL Query Result Cache Pl/SQL Function Result Cache client side * Restrictions * can't use SQL Query Result Cache on: Temporary tables, Dictionary tables, non-deterministic Pl/Sql functions(use deterministic keyword), [currval,nextval,sysdate,sys_timestamp,current_date ,current_timestamp,local_timestamp,userenv,sys_con text,sys_quid], undergoing modification data, subqueries, pipelined functions, functions can't have OUT or IN parameters, no anonymous blocks, no in package with invoker's right, no IN params as LOB, REF CURSOR, RECORD, COLLECTION, OBJECT create or replace function pippo(id number) return pp_record RESULT_CACHE RELIES_ON (persone) is ... #RELIES_ON means that cached results will became invalid when the structure of any database object on which they depend is changed * Packages * DBMS_RESULT_CACHE.memory_report #Reports cache enabled, block size, max cache size, max result size, memory [total | fixed | dynamic] DBMS_RESULT_CACHE.status #enabled/disabled, in RAC shows if synchronizing DBMS_RESULT_CACHE.flush return boolean #return true on successful removal, disable/close the cache DBMS_RESULT_CACHE.bypass(true) before flushing DBMS_RESULT_CACHE.bypass([true | false]) #bypass the cache * Views * V$RESULT_CACHE_STATISTICS #list cache settings and memory usage stats V$RESULT_CACHE_OBJECTS(type, status[new | published | bypass | expired | invalid], name) #list cached objects and attributes V$RESULT_CACHE_DEPENDENCY V$RESULT_CACHE_MEMORY #show all memory blocks and statistics Client Result Cache OCI based, cache only top-level SQL statements no query fragments, excellent on lookup tables. Independent from server result cache client sqlnet.ora init params override the db server init params * Restrictions * no views, no remote objects, no complex types, no flashback, no SQL with pl/sql, no VPD policies on the tables * Views * CLIENT_RESULT_CACHE_STAT$ #show client settings and usage statsAdaptive Cursor Sharing
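
A small usage example of the result cache hint described above (the persone table follows the earlier examples):

select /*+ result_cache */ count(*) from persone;
#repeated executions are served from the result cache (with result_cache_mode=MANUAL the hint is required)
select /*+ no_result_cache */ count(*) from persone;
#bypasses the cache even with result_cache_mode=FORCE
exec DBMS_RESULT_CACHE.memory_report;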

    Automatic feature can't be turned off, no init params. Works the same for bind vars and literals. 10g: Bind variable not good for plan generation, first time will determine the plan forever(bind peeking) 11g: Generate new child cursors Bind sensitive cursor: values are so different that a new plane is needed. New plans are added to plan history but not used until db has proved a better performance. See optimizer_capture_sql_plan_baselines. Once the plane is changed the cursor became bind-aware, more plans will be used depending on params * Views * V$SQL(is_bind_sensitive, is_bind_aware) V$SQL_CS_HISTOGRAM #shows distribution of exceution count across execution history histogram V$SQL_CS_SELECTIVITY #shows selectivity ranges stored in cursor for predicates with bind variables V$SQL_CS_STATISTICS #stats for different bind sets, buffer gets and CPU timeFlashback Data Archive FDA - Oracle Total Recall

    Requires Automatic Undo Management. Archivelog is not required. Data archive tablespaces must use ASSM Store data changes compressed for any length of time you want in one or more tbs in an internal history table for each table tracked. Only for some tables you choice. You can't modify data in Flashback, only purge. Table/s must be enabled for tracking(flashback archiving). Can set different retention for different tables, when in different archives, assigning different flashback data archives. Retention at table level, must be specified for data archive. An internal historical table is created with few more columns and range partitioned. FBDA process, sleep time automatically adjusted. Can use flashback SQL(as of). Old data is automatically purged. May create default archive(not required) * Differences from Flashback Database FD* FD may take db back in time, while FDA is read only. FDA is online FD is offline. FD at db level while FDA at table level. * Limitations * no [drop | rename | modify] column, yes add column. no partition operations. no LONG to LOB conversion. no [drop | truncate] table SQL>grant flashback archive ADMINISTER to user; #ADMINISTER allows [create flashback archive, alter flashback archive, drop flashback archive] SQL>grant flashback archive on t1 to user; #user also need select privs, execute on DBMS_FLASHBACK SQL>create flashback data archive f1 tablespace tbs1 quota 2000m RETENTION 4 year; #RETENTION must be specified for data archive alter flashback archive f1 purge all; alter flashback archive f1 purge before timestamp(systimestamp - interval '2' day); alter flashback archive f1 purge before scn 1233 alter table t1 noflashback archive f1; #all flashback data is removed alter flashback archive fla1 modify retention 1 year select * from t1 versions between timestamp to_timestamp(...) and maxvalue * Views * DBA_FLASHBACK_ARCHIVE(flashback_archive_name, retention_in_days, status), DBA_FLASHBACK_ARCHIVE_TS(quota_in_mb), DBA_FLASHBACK_ARCHIVE_TABLES Flashback Transaction Blackout database in archivelog, undo and redo data are needed. supplemental logging and primary key supplemental logging at db level must be enabled. Transaction dependencies: write-after-write: dependent transaction modifies data modified by parent transaction primary key constraint: dependent transaction reinserts the primary key deleted by parent transaction alter database add supplemental log data; alter database add supplemental log data (primary key) columns; grant execute on dbms_flashback to hr; grant select any transaction to hr; grant insert, update, delete on pippo to hr; #grant DML on specific involved tables * Packages * DBMS_FLASHBACK.transaction_backout(numberofxids, xids(array), options[see below...], scnhint(SCN at the start of transaction) or timehint) #no commit inside! nocascade: (def.) no depent transactions expected cascade: dependent transactions are backed out before the parent nocascade_force: dependent transactions are ignored noconflict_only: only those rows with no conflict in parent transactions * Views * DBA_FLASHBACK_TRANSACTION_STATE #backed out transactions DBA_FLASHBACK_TRANSACTION_REPORT Oracle SecureFiles

    Oracle SecureFiles
    The 10g LOB implementation is called BasicFile in 11g; SecureFile is the new implementation. Transparent encryption. Variable chunk size up to 64M, chunks stored next to one another. Redo is generated only for changed data. New client/server network layer for faster data transfer.
    Every option may be changed later. The LONG API can't be used to configure SecureFile settings. COMPATIBLE must be at least 11.0. An error is raised if you use SecureFile capabilities on a BasicFile.
    1) DEDUPLICATION detects and avoids duplicated data; Advanced Compression is required, only one copy is saved, deduplication is disabled by default
    2) COMPRESSION [HIGH | MEDIUM(def.)], compression is disabled by default
    3) ENCRYPTION requires the Advanced Security Option [3DES168 | AES128 | AES192 | AES256], encryption is disabled by default
    Older storage clauses (chunk, pctversion, freelists) are not necessary with SecureFiles; instead use these:
    MAXSIZE: maximum LOB segment size
    RETENTION: version control policy
    MAX: after MAXSIZE is reached, old versions are reused
    MIN: old versions are retained for at least the time specified
    AUTO: default
    NONE: old versions are reused as much as possible
    create table pippo(l clob) lob(l) store as [securefile | lob] (compress [high | medium] deduplicate lob [cache | nocache | cache reads] nologging);
    #STORE AS is optional, default is the traditional LOB
    #CACHE will place LOB pages in the buffer cache for speedier access; default is NOCACHE, CACHE READS caches only during reads, not for writes
    #NOLOGGING will not generate any redo
    #COMPRESS [HIGH | MEDIUM(def.)]
    #(NO) KEEP DUPLICATES specifies whether the database will store duplicates for the LOB column
    create table pippo (doc clob) lob(doc) store as securefile(encrypt);
    create table pippo (doc clob encrypt using 'AES256') lob(doc) store as securefile;
    Migrating to SecureFiles
    * Partition Exchange * Ensure you have enough space for the largest partition, long maintenance window, segment offline
    * Online Redefinition (recommended) * No segment offline, migration in parallel, indexes must be rebuilt, double space required. Materialized views must be refreshed if the source table is redefined.
    create table tab1(id number, c lob) lob(c) store as lob;
    create table tab2(id number, c lob) lob(c) store as securefile;
    DBMS_REDEFINITION.start_redef_table('scott', 'tab1', 'tab2', 'id id, c c');
    DBMS_REDEFINITION.copy_table_dependents('scott', 'tab1', 'tab2', 1, true, true, true, true, false, error_count);
    DBMS_REDEFINITION.finish_redef_table('scott', 'tab1', 'tab2');
    * Packages *
    DBMS_LOB.getoptions #return LOB settings
    DBMS_LOB.setoptions
    DBMS_LOB.get_deduplicated_regions
    DBMS_SPACE.space_usage #determine disk space used by all LOBs in a segment, only for ASSM segments
    * Views *
    DBA_SPACE, DBA_LOBS
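    A minimal sketch of verifying how a LOB column was created (the table name is an assumption; the SECUREFILE column of DBA_LOBS is used for the check):
    create table docs (id number, body clob)
      lob(body) store as securefile (compress medium deduplicate);
    select table_name, column_name, securefile
    from dba_lobs
    where table_name = 'DOCS'; #SECUREFILE = YES confirms the new storage format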

    Locking Enhancements
    By default a DDL statement will fail if it can't get an immediate DML lock on the table.
    lock table pippo in exclusive mode [nowait | wait (def., optional)]; #if the mode is omitted the statement waits indefinitely
    The following statements no longer require an exclusive lock (see the sketch below):
    create index ... online;
    create materialized view log;
    alter table ... enable constraint novalidate;
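    A minimal sketch of the two lock modes (the table name pippo follows the examples above; the 60-second wait value is just an illustration):
    lock table pippo in exclusive mode nowait; #returns ORA-00054 immediately if another session holds a conflicting lock
    lock table pippo in exclusive mode wait 60; #waits up to 60 seconds for the lock before giving up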

    Minimal invalidation - Fine Grained Dependency Management
    Only logically affected objects, such as views and synonyms, are invalidated during online redefinition; triggers have not been enhanced, they are still invalidated.
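    A minimal sketch of the behaviour (table and view names are assumptions): adding a column that a view does not reference should no longer invalidate it.
    create table t(a number, b number);
    create view v as select a from t;
    alter table t add (c number);
    select object_name, status from user_objects where object_name = 'V'; #expected to remain VALID in 11g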

    Parameter file creation from Memory
    create [spfile | pfile] from memory; #will contain values for all (roughly 150) init parameters, even those never explicitly set

    Hot Patching
    Supported on RAC. opatch can enable, disable and install hot (online) patches. Currently only for Linux and Solaris.
    Consumes extra memory depending on the number of currently running Oracle processes; at least one OS page of memory (4-8 KB) per Oracle process is required.
    opatch query -is_online_patch #determine if a patch is a hot (online) patch

    Invisible Indexes
    Invisible to the optimizer, discarded from plans. Can be changed between invisible and visible at any time.
    create index inv1 on pippo(nome) invisible;
    alter index i1 invisible;
    alter index i1 visible;
    * Views *
    DBA_INDEXES(visibility)
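    A minimal sketch of testing an invisible index in one session before exposing it (the OPTIMIZER_USE_INVISIBLE_INDEXES session parameter is used here; table and index names follow the example above):
    create index inv1 on pippo(nome) invisible;
    alter session set optimizer_use_invisible_indexes = true; #only this session's optimizer may now consider inv1
    select * from pippo where nome = 'X';
    alter index inv1 visible; #expose it to every session once satisfied with the plan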

    Shrinking Temporary Tablespaces
    May shrink temporary tablespaces and single tempfiles online, down to a minimum of 1MB (def.).
    alter tablespace temp shrink space [keep 100m]; #the released space is not equally distributed among tempfiles
    alter tablespace temp shrink tempfile '/pippo.dbf' [keep 100m];
    * Views *
    DBA_TEMP_FREE_SPACE(tablespace_name, free_space) #FREE_SPACE in bytes: space unallocated plus allocated space available to be reused
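    A minimal sketch of checking free space before and after a shrink (the 200m KEEP target is an arbitrary illustration):
    select tablespace_name, free_space from dba_temp_free_space;
    alter tablespace temp shrink space keep 200m;
    select tablespace_name, tablespace_size, free_space from dba_temp_free_space;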

    Tablespace Option for Temporary Tables
    May manually specify a temporary tablespace for a global temporary table; its indexes are created in the same tablespace.
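    A minimal sketch of the clause (the temp2 temporary tablespace and the table name are assumptions):
    create global temporary table gtt_orders (id number, amount number)
      on commit preserve rows
      tablespace temp2; #indexes on gtt_orders will also be created in temp2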

    PL/SQL and Java Automatic Native Compilation
    See the parameters plsql_code_type and plsql_optimize_level.
    alter procedure pioppo compile plsql_code_type=native;
    * Recompiling a Database for PL/SQL Native Compilation *
    1) shutdown immediate;
    2) set plsql_code_type=native, plsql_optimize_level=3
    3) startup upgrade
    4) @?/rdbms/admin/dbmsupgnv.sql
    5) restart and run utlrp.sql
    * Views *
    DBA_PLSQL_OBJECT_SETTINGS(plsql_code_type)
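    A minimal sketch of compiling a single unit natively and verifying it (the procedure name pioppo follows the example above):
    alter session set plsql_code_type = native;
    alter procedure pioppo compile; #picks up the session setting
    select name, type, plsql_code_type from user_plsql_object_settings where name = 'PIOPPO';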

    OLTP Table Compression
    Writes are not degraded and reads improve because data is read directly in compressed form. Disk space and memory use are minimized; CPU use is increased.
    May compress data during DML, not only when bulk loading (10g) or in a CTAS (10g). Holes made by deletes are eliminated.
    Compression may be enabled for tables, partitions and tablespaces. A block is compressed when it reaches PCTFREE.
    create table pippo(...) compress; #normal compress
    create table pippo(...) compress for direct_load operations; #normal compress for data warehouse workloads, not for OLTP
    create table pippo(...) compress for all operations; #compress for DML operations
    alter table pippo compress for all operations; #an existing table may be altered to compress, but existing data remains uncompressed
    * Views *
    DBA_TABLES(compression, compress_for)
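    A minimal sketch of enabling OLTP compression and checking it (the table name is an assumption):
    create table sales_oltp (id number, amount number) compress for all operations;
    select table_name, compression, compress_for
    from user_tables
    where table_name = 'SALES_OLTP'; #expected COMPRESSION = ENABLED, COMPRESS_FOR = FOR ALL OPERATIONS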

    Direct NFS Client
    Simplifies manageability across multiple platforms; performance is increased, but kernel NFS is not completely bypassed since it is still used for network communication.
    Avoids the kernel NFS layer. Load balancing. You may control the I/O path. The OS must still perform the mount.
    The NFS version 3 protocol is implemented in the Oracle RDBMS kernel. If there is a mismatch between oranfstab and the OS mount points, the NFS mount is stopped. To remove an NFS path that is in use, restart the database.
    Direct NFS will be disabled if you delete oranfstab, modify its settings, or put back the stub libodm11.so. If the database can't open the NFS mount using Direct NFS, it falls back to OS kernel NFS.
    * Configuring *
    1) Specify the mount point in /etc/mtab (the first default location) and make sure the OS performs the mount
    2) (optional) edit oranfstab for Oracle-specific options such as additional paths to a mount point
    3) Mount point search order: (optional) $ORACLE_HOME/dbs/oranfstab, (optional) /etc/oranfstab, (when no oranfstab) /etc/mtab
    4) Enable Direct NFS by replacing the standard ODM library $ORACLE_HOME/lib/libodm11.so with the ODM NFS library libnfsodm11.so
    oranfstab format:
    server: TestServer1 #NFS server name
    path: 130.33.34.11 #up to 4 network paths; if the first fails the others are used, with load balancing
    export: /vol/oradata1 #exported path from the NFS server
    mount: /mnt/oradata1 #local mount point
    * Views *
    V$DNFS_STATS #show performance stats for Direct NFS
    V$DNFS_SERVERS #show servers accessed by Direct NFS
    V$DNFS_FILES #files currently open via Direct NFS
    V$DNFS_CHANNELS #open network paths
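    A minimal sketch of checking from SQL whether files are being served through Direct NFS (server names and paths would match your oranfstab):
    select svrname, dirname from v$dnfs_servers; #NFS servers the instance is talking to
    select filename from v$dnfs_files; #datafiles currently open through Direct NFS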


