Tom Kyte

These are the most recently asked questions on Ask Tom

Catastrophic Database Failure -- Deletion of Control and Redo Files

Wed, 2024-04-10 14:26
We recently had a database failure that resulted in data loss after an Oracle 19.3.0.0.0 database had both its control files and redo log files deleted. Please note that I am not a DBA, but an analyst who supports the system that sits on this Oracle database. Any amount of data loss is fairly serious, and I am wondering how we avoid this in the future. Before the control and redo files were deleted, we had an event in which the drive this database sits on filled up. This caused the database to stop writing transactions and prevented users from accessing the application. Once space was freed on the drive, the database operated normally for several hours until... the redo and control files were deleted. What would have caused the control and redo files to be deleted? While trying to figure out what happened, it was suggested that if we had expanded the drive's storage in response to its becoming full, the later data loss would not have happened. Does Tom agree with that sentiment? Are these two events linked (disk drive nearly full and later data loss), or are they symptomatic of two different things?
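
As background for the "how do we avoid this" part, one thing a DBA would typically check is whether the control files and online redo logs are multiplexed onto more than one disk. A minimal sketch of the queries involved (the views are standard; the number of copies and their locations will of course differ per database):

<code>
-- List every control file copy the instance knows about
SELECT name FROM v$controlfile;

-- List each redo log group and its member files
SELECT group#, member FROM v$logfile ORDER BY group#;
</code>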
Categories: DBA Blogs

Is 'SELECT * FROM :TABLE_NAME;' available?

Tue, 2024-04-09 20:06
Is 'SELECT * FROM :TABLE_NAME;' available?
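
For context, a bind variable can only supply a data value, not an identifier such as a table name, so :TABLE_NAME cannot work as written. A minimal sketch of the usual workaround with native dynamic SQL, assuming a hypothetical EMP table and using DBMS_ASSERT to sanitize the identifier:

<code>
DECLARE
  l_table_name VARCHAR2(128) := 'EMP';  -- hypothetical table name
  l_cnt        PLS_INTEGER;
BEGIN
  -- The identifier is concatenated into the statement text, not bound;
  -- DBMS_ASSERT.SQL_OBJECT_NAME guards against SQL injection.
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM ' || DBMS_ASSERT.SQL_OBJECT_NAME(l_table_name)
    INTO l_cnt;
  DBMS_OUTPUT.PUT_LINE('Rows: ' || l_cnt);
END;
/
</code>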
Categories: DBA Blogs

Is fragmentation an issue?

Mon, 2024-04-08 07:06
Hi all, I have around 1,000 tables. Some of them have had rows deleted and updated, so fragmentation has built up. How do I determine which tables are fragmented?
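
A rough sketch of one common approach, assuming table statistics are reasonably fresh: compare the space the rows should need (num_rows * avg_row_len) with the blocks actually allocated below the high-water mark.

<code>
-- Rough fragmentation estimate from optimizer statistics (needs recent stats)
SELECT table_name,
       num_rows,
       blocks * 8192                  AS allocated_bytes,      -- assumes an 8K block size
       num_rows * avg_row_len         AS estimated_data_bytes,
       ROUND((blocks * 8192 - num_rows * avg_row_len) / (blocks * 8192) * 100, 1) AS pct_unused
  FROM user_tables
 WHERE blocks > 0
 ORDER BY pct_unused DESC;
</code>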
Categories: DBA Blogs

19c doesn't allow truncation of data that is longer than the column's char(40) definition

Mon, 2024-04-08 07:06
We have an application that has been written to insert a variable that is char(50) into a column that is defined as char(40). In Oracle 11g (I know this is very old) it would merely truncate the last 10 characters without issue. However, Oracle 19c doesn't allow this and raises an exception (which I believe should always have been the case). Where can I find documentation of this restriction and when it was changed, and is there a way around it other than changing the program code? Oracle 11g truncated those extra 10 characters in the statement below: ADBBGNX_ADDRESS_LINE_1 := agentrecord.producerrec.businessAddressLine1; Oracle 19c throws an exception with a NULL error status.
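
If changing the code is ultimately acceptable, a minimal sketch of the explicit-truncation workaround, assuming the target really is 40 characters wide (names are the ones from the statement above):

<code>
-- Truncate explicitly so the assignment never exceeds the CHAR(40) target
ADBBGNX_ADDRESS_LINE_1 := SUBSTR(agentrecord.producerrec.businessAddressLine1, 1, 40);
</code>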
Categories: DBA Blogs

Returning data in EXECUTE IMMEDIATE with dynamic values in USING clause

Mon, 2024-04-08 07:06
Hi Team, I have the scenario below.
Step#1) User clicks into a particular App UI screen.
Step#2) User selects multiple filters on the UI - say filter1, filter2 - which correspond to table columns.
Step#3) For each filter selected, the user enters data - say Mark (for filter1), Will (for filter2) - based on which the search is performed on the respective filters (aka table columns).
Step#4) The user inputs from Steps #2 and #3 are passed to a PL/SQL API which returns the desired SQL result in a paginated manner (pageSize: 50).
The user inputs from Steps #2 and #3 will be dynamic. I have tried to implement this using native dynamic SQL, but it looks like I have hit a dead end. I am able to use dynamic values in the USING clause, but not able to return the data from the SELECT statement with EXECUTE IMMEDIATE. I have shared a LiveSQL link (above) which has a reproducible test case. If I comment out the line "BULK COLLECT INTO l_arId, l_arName, l_arType" in the procedure, the block executes successfully, but I need the result set from the SELECT statement in the procedure as output. Looking for some advice here. Thanks a bunch!
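
A minimal sketch of the clause ordering that usually trips people up here: with native dynamic SQL the BULK COLLECT INTO clause comes directly after the statement text, before USING (the table and column names below are hypothetical):

<code>
DECLARE
  TYPE t_id_tab   IS TABLE OF NUMBER;
  TYPE t_name_tab IS TABLE OF VARCHAR2(100);
  l_ids   t_id_tab;
  l_names t_name_tab;
  l_sql   VARCHAR2(4000);
BEGIN
  -- Hypothetical dynamic statement built from the user's selected filters
  l_sql := 'SELECT id, name FROM app_records WHERE first_name = :1 AND last_name = :2';

  -- BULK COLLECT INTO precedes USING
  EXECUTE IMMEDIATE l_sql
    BULK COLLECT INTO l_ids, l_names
    USING 'Mark', 'Will';

  DBMS_OUTPUT.PUT_LINE('Fetched ' || l_ids.COUNT || ' rows');
END;
/
</code>

For the paginated case, an OPEN-FOR on a SYS_REFCURSOR with the same USING binds is another option worth considering.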
Categories: DBA Blogs

Restore from Archivelog only

Mon, 2024-04-08 07:06
Hello Sir, I was able to get one scenario to work. In that scenario I had a VM (server) running Oracle 19c with just 1 table and 5 records, and I took a backup of the whole VM (disk backup). I then added a new table to my db with 3 records (having ensured the db is in ARCHIVELOG mode) and ran: rman target / ... backup database plus archivelog; Then I added 2 more records and noted the system time, let's say 2024-04-06 15:33:55 (so I can restore up to this time). So basically a new table with 5 records. Once all this was done I ran: backup incremental level 1 database plus archivelog;

Then I deleted my VM and restored the old copy of my VM backup (the one that had 1 table and 5 records). After this VM restore I followed the steps below and was able to get point-in-time recovery to work up to 2024-04-06 15:33:55 (at this point I have 2 tables with 5 records each). The main step which I had missed earlier was RESTORING the control file, since I was doing the restore on a different (new) VM server:

<code>
shutdown abort;
startup nomount;
RESTORE CONTROLFILE FROM "/mnt/orabkup1/snapcf_ev.f";
startup mount;
run {
  SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
  sql 'ALTER DATABASE OPEN RESETLOGS';
}
</code>

Everything was good here, and with this approach I was able to get point-in-time recovery to work; I had simply been missing the restore of the control file.

Now for the scenario which I am still not able to work out, and I am sure I am making a very basic mistake (maybe I don't understand the archive log and redo log properly). The scenario I want to make work is: I have a VM backup (disk backup) at the level of 1 table and 5 records. Then I create a 2nd table, add let's say 2 records to it, and this time I only take an ARCHIVELOG backup; then I add 3 more records, run an incremental archivelog backup, and note the time (let's assume 2024-04-06 15:33:55), with the following steps:

<code>
backup archivelog all;
insert into xxx VALUES(3,'Line 1');
insert into xxx VALUES(4,'Line 1');
commit;
backup incremental level 1 archivelog all;
</code>

Here I have not done "backup database plus archivelog" (assuming all those new inserts would be in the redo log and maybe in the archive log?). Now I delete this VM, restore a new VM from disk backup1 (where only 1 table with 5 records exists), and simply run the following:

<code>
shutdown abort;
startup nomount;
RESTORE CONTROLFILE FROM "/mnt/orabkup1/snapcf_ev.f";
startup mount;
run {
  SET UNTIL TIME "TO_DATE('2024-04-06 15:33:55', 'YYYY-MM-DD HH24:MI:SS')";
  RESTORE ARCHIVELOG all;
  RECOVER DATABASE;
  sql 'ALTER DATABASE OPEN RESETLOGS';
}
</code>

But unfortunately it complains:

ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'D:\BASE\MYDB\DATA\SYSTEM01.DBF'

Not sure why this happens, as I was thinking that Arc...
Categories: DBA Blogs

Does migrating a 4k tablespace block size to an 8k database cause a performance impact?

Mon, 2024-04-08 07:06
I am migrating an 11g database cross-endianness from on-prem to ExaCS. The on-prem database db_block_size is 4k and all the tablespaces are also of 4k block size. <u>Since I cannot provision a non-standard block size database in OCI</u>, I am worried about the performance impact caused by the different block size. Please help me understand what database block size would be recommended for the scenario below.

<code>
-----------------------------------------------------------
Source : ON_PREM
-----------------------------------------------------------
Platform / ID            : AIX-Based Systems (64-bit) / 6
Version                  : 11.2.0.4.0
Size                     : 17 TB
db_block_size            : 4k
All Tablespaces BLK Size : 4k
-----------------------------------------------------------
Target : OCI - EXACS
-----------------------------------------------------------
Platform / ID            : LINUX / 13
Version                  : 11.2.0.4.0
Size                     : 17 TB
db_block_size            : 8K
APP Tablespaces BLK Size : 4k
SYSTEM/SYSAUX/TEMP/UNDO  : 8K
</code>

Phase 1: Migrate from AIX 11g to ExaCS 11g.
Phase 2: 19c upgrade and multitenant. {<i>Due to a business requirement we have to split the migration and the upgrade.</i>}

<b>Questions:</b>
1. Can we guarantee that there will be no performance impact due to the difference between tablespace and database block size if the db_4k_cache_size parameter is set to an adequately large value?
2. Or is it better to go with the same 4k block size as the source on-premises database?
Of course application regression testing and RAT will be included, but testing both cases is not feasible, hence reaching out for expert advice.
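
For reference, a minimal sketch of what keeping the 4k application tablespaces on an 8k database involves; the 2G cache size below is purely illustrative, not a recommendation:

<code>
-- A non-default block size needs its own buffer cache before any 4K tablespace can be used
ALTER SYSTEM SET db_4k_cache_size = 2G SCOPE=BOTH;

-- Application tablespaces can then be created (or transported in) with the 4K block size
-- (DATAFILE SIZE without a path assumes Oracle Managed Files; otherwise name the datafile)
CREATE TABLESPACE app_data_4k
  DATAFILE SIZE 10G AUTOEXTEND ON
  BLOCKSIZE 4K;
</code>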
Categories: DBA Blogs

Oracle to Power BI

Thu, 2024-04-04 10:26
How do I connect an Oracle database / data set to Power BI? I have already searched Google and YouTube but I can't get it to work. Please help. Thank you.
Categories: DBA Blogs

Compiling Java class

Thu, 2024-04-04 10:26
Hi, We are trying to compile the following class in Oracle Database 23c. However, we are encountering a surprising error (we are not using ANY database link):

Error report -
ORA-04054: database link MYOPTIMIUM does not exist
04054. 00000 - "database link %s does not exist"
*Cause:    During compilation of a PL/SQL block, an attempt was made to use a non-existent database link.
*Action:   Either use a different database link or create the database link.

Java class:

<code>
create or replace and compile java source named "llm" as
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.sql.Clob;
import java.sql.SQLException;
import javax.script.ScriptEngineFactory;
import javax.script.ScriptEngineManager;

public class llm {
    public static String postPrompt(String appSource, String appSourceID, String targetllm,
                                    String param4, Clob pprompt) throws IOException, SQLException {
        try (Reader reader = pprompt.getCharacterStream()) {
            if (reader != null) {
                try (BufferedReader bufferedReader = new BufferedReader(reader)) {
                    // Read the CLOB prompt into a single string
                    StringBuilder stringBuilder = new StringBuilder();
                    String line;
                    while ((line = bufferedReader.readLine()) != null) {
                        stringBuilder.append(line);
                    }
                    String clobData = stringBuilder.toString();

                    // Invoke the external Python script and capture its stdout
                    ProcessBuilder pb = new ProcessBuilder("python3",
                        "/opt/oracle/Optimium/python/app/optimium/invokellm.py",
                        appSource, appSourceID, targetllm, param4, clobData);
                    Process p = pb.start();

                    StringBuilder out = new StringBuilder();
                    BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    String thisLine;
                    while ((thisLine = in.readLine()) != null) {
                        out.append(thisLine);
                    }
                    return out.toString();
                }
            }
        }
        return "";
    }
}
/
</code>

Thanks
Sammeer
Categories: DBA Blogs

SQL Performance differences with STATS gather

Mon, 2024-03-25 12:21
We have seen many situations in our environment where a SQL statement was running badly even though the plan for the query had not changed. When we gather stats on the associated table, we see the same query perform significantly better. However, there is no change in the PHV of the execution plan. My question is: if the PHV stays the same, that means the execution plan remains the same, so why does the performance vary? Are table statistics used by the optimizer even after the plan is generated?
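
One way to dig into this, sketched under the assumption that the SQL_ID is known, is to compare the runtime row-source statistics of the cursor before and after the stats gather, rather than only the plan hash value:

<code>
-- Needs STATISTICS_LEVEL=ALL or a /*+ GATHER_PLAN_STATISTICS */ hint in the query
-- for the ALLSTATS columns (actual rows, buffers, time) to be populated
SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ALLSTATS LAST'));
</code>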
Categories: DBA Blogs

How does Oracle Database provide the user$.user#

Mon, 2024-03-25 12:21
Hi Tom, eventually, after many years, I have come across a question I had never considered. I have to deal with a customer who uses user$.user# for application purposes. That is, after creating a user, the application stores the user# in application table columns, say USR_ID, which subsequently means that user$.user# has to match the USR_ID. Consequently, if you have to migrate that application via expdp / impdp (we have to, as we are migrating from Solaris to Linux), these IDs won't match anymore, since the users on the new database are created with different user$.user# values. You do not have to tell me that THAT application needs "some redesign"... However, I have some questions regarding user$.user#. As far as I have seen / read, when creating a new user with the usual "create user" statement, the new user's user# is provided by the Oracle RDBMS as the user# of "_NEXT_USER". _NEXT_USER's user# serves as a high-water mark; even when dropping the user again, _NEXT_USER's user# won't decrease (it looks like an Oracle-maintained sequence is used), so creating and dropping users leads to unused ranges of numbers in user$.user#. Questions: - Which sequence provides the number for _NEXT_USER? - Is there any way to reset it? - Or is there any way to influence user$.user#, or the number that is provided by the RDBMS to be stored as user$.user#? => I assume this may result in corruption, but perhaps there is a way. Thanks and best regards - Dietmar
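
For illustration, a minimal sketch (run as SYS or a suitably privileged user) of observing the high-water-mark behaviour described above; this only reads the dictionary and does not attempt to change anything:

<code>
-- Current value of the user# high-water mark
SELECT user#, name FROM sys.user$ WHERE name = '_NEXT_USER';

-- Compare with the IDs actually assigned to existing users
SELECT user_id, username FROM dba_users ORDER BY user_id;
</code>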
Categories: DBA Blogs

AUD$ EXP/TRUNCATE/IMP (feasibility)

Mon, 2024-03-25 12:21
We are going to MOVE the tablespace of the AUD$ table in PROD. Purpose: the AUD$ table is totally fragmented and the CLEANUP / PURGE runs very slowly, even with a maximum batch size of 1,000,000. In our test environment we had some issues using the API (DBMS_AUDIT_MGMT) to move the tablespace of AUD$, and we are using the STANDARD AUDIT TRAIL:

SELECT * FROM dba_audit_mgmt_config_params
 WHERE audit_trail = 'STANDARD AUDIT TRAIL'
   AND parameter_name = 'DB AUDIT TABLESPACE';

PARAMETER_NAME        PARAMETER_VALUE   AUDIT_TRAIL
DB AUDIT TABLESPACE   CLARITYAUDIT      <b>STANDARD AUDIT TRAIL</b>

The MOVE worked after we got an action plan from Oracle Support to fix the issue in the TEST environment, so the MOVE went through via the API. Now we are planning to do the tablespace move of AUD$ in PROD (<b>online</b>!). But we need a fallback plan in case the MOVE hangs or the data in the AUD$ table becomes inconsistent. The fallback plan is:

1) EXP the data in a downtime window (audit trail disabled) and keep the dump file on the server, using the parameter "DATA_ONLY", as the metadata (table DDL) will still be there.
2) Run the tablespace MOVE in PROD via the API (DBMS_AUDIT_MGMT).
3) If it goes through and AUD$ is accessible and purgeable, we are good; if not, we truncate the data in AUD$ and IMP the saved data (from the EXP dump file), again with the parameter "DATA_ONLY".

I hope that's clear enough. The question now is whether step 3 would work or not. We are also planning to test step 3 in our TEST environment, but we are concerned that this action plan (especially step 3) could impact PROD in case we need to go for the fallback plan. We would also appreciate any other action plan / option to save and recover the data in AUD$ in the above scenario. Thank you! Ali
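
For reference, a minimal sketch of the API call involved in the move, assuming the standard (non-unified) audit trail and the tablespace name from the question:

<code>
BEGIN
  -- Relocate the standard audit trail (AUD$) to the named tablespace
  DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_LOCATION(
    audit_trail_type           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    audit_trail_location_value => 'CLARITYAUDIT');
END;
/
</code>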
Categories: DBA Blogs

Question - increase size of number column

Mon, 2024-03-25 12:21
We just hit a 2.1 billion row count on a table with an INT primary key. This is the worst thing to happen :( Does anyone know if we can do the ALTER without requiring space on the DB for the entire table?
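
For what it's worth, Oracle maps the ANSI INT type to NUMBER with the maximum precision of 38, and increasing a NUMBER column's declared precision is a data-dictionary-only change that does not rewrite the existing rows. A minimal sketch with hypothetical table and column names:

<code>
-- Widen the declared precision of the primary key column (hypothetical names)
ALTER TABLE big_table MODIFY (id NUMBER(19));
</code>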
Categories: DBA Blogs

SPM and GTTs

Tue, 2024-03-19 03:26
Howdy, I'm wondering how SPM and things like https://blogs.oracle.com/optimizer/post/what-is-add-verified-spm would be impacted by the presence of global temporary tables within the query (or queries). I've been looking for documentation that outlines how SQL plan management behaves when dealing with queries relying on GTTs, but I haven't had any luck so far. Basically I'm curious how reliably baselines, evolving, etc. can/do work when dealing with queries that could have wildly different data sets in the GTTs within the query. Cheers,
Categories: DBA Blogs

DR setup involving replicated database

Mon, 2024-03-18 09:06
Howdy, The current setup I'm looking at is an OLTP production system running Oracle 19.20 (4-instance RAC) with Active Data Guard. This system seeds a data warehouse running Oracle 19.20 by way of Oracle GoldenGate via an integrated extract. At present the warehouse does not have a DR solution in place, and that's the point of this post. I'm wondering what the best solution would be for a warehouse DR strategy when GoldenGate is in play like this. I assume Data Guard again, but I'm happy to hear other thoughts. The bulk of my questions involve the GoldenGate component: I'm not sure how that would need to be set up / configured in order to minimize the complexity of any role transitions on either the transactional or warehouse side (or both), and which scenarios can be handled seamlessly versus requiring manual intervention. Thanks a bunch! Cheers,
Categories: DBA Blogs

Gather STATS on Partitioned Table and Parallel for Partitioned Table

Mon, 2024-03-18 09:06
Hi, I have a list-partitioned table, partitioned by VERSION_ID, which has around 15 million rows per partition. Each day a new partition ID is created; we bulk insert 15 million rows with 500 columns and then run about 10 MERGE updates of multiple columns from multiple other tables. Is it good to gather stats once after the insert and then once more after the multiple updates? What is good practice, performance-wise, for gathering stats in these partitioned-table scenarios? Second question: when I use MERGE on a partitioned table from another partitioned table, I see the following in the explain plan when I use a parallel DML hint: "PDML disabled because single fragment or non partitioned table used".
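
A minimal sketch of gathering statistics for just the freshly loaded partition, with hypothetical owner, table, and partition names:

<code>
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => USER,
    tabname     => 'FACT_TABLE',      -- hypothetical table name
    partname    => 'P_VERSION_100',   -- the partition just loaded
    granularity => 'PARTITION',
    degree      => 8);
END;
/
</code>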
Categories: DBA Blogs

Updating a partitioned table from another partitioned table - performance issue

Mon, 2024-03-18 09:06
Hi, I am migrating from Sybase IQ to Oracle 19c. There are many updates happening from one or multiple tables. My Target_TBL table has 18 million records per partition and there are thousands of partitions (partitioned by VersionID). APP_ID is another key column in this table. I have 10 tables partitioned by APP_ID which have around 10 to 15 million records each, and 5 non-partitioned lookup tables which are smaller in size. I have rewritten all the update statements as MERGE in Oracle 19c. All the updates happen for one VersionID only, which is in the WHERE clause, and I join the source table using APP_ID and another key column to update 70 to 100% of the records in each update.

1. The target table has a different key column to update it from the partitioned source tables (10 to 15 million rows); I have to do this with 10 different MERGE statements.
2. The target table has different key columns to update from the non-partitioned lookup tables; I have to do this with 5 different MERGE statements.

In Sybase IQ all these updates complete in 10 minutes; in Oracle 19c they take more than 5 hours. I have enabled parallel query and parallel DML.

A) Can you suggest a better way to handle these kinds of updates?
B) In a few places the explain plan shows "PDML disabled because single fragment or non partitioned table used".
C) I leave the large source-table updates to go with hash joins.
D) I force the lookup source-table updates to use nested loops. Is this good or not?
E) If I need to use indexes, can I go with local/global indexes on the other key column references for the lookup tables?

I would appreciate any other suggestions to handle these scenarios. Example:

<code>
MERGE INTO Target_TBL
USING SOURCE_A
   ON (SOURCE_A.APP_ID = Target_TBL.APP_ID AND SOURCE_A.JOB_ID = Target_TBL.JOB_ID)
 WHEN MATCHED THEN UPDATE SET Target_TBL.email = SOURCE_A.email
  WHERE Target_TBL.VersionID = 100 AND SOURCE_A.APP_ID = 9876;

MERGE INTO Target_TBL
USING SECOND_B
   ON (SECOND_B.APP_ID = Target_TBL.APP_ID AND SECOND_B.DEPT_ID = Target_TBL.DEPT_ID)
 WHEN MATCHED THEN UPDATE SET Target_TBL.salary = SECOND_B.salary
  WHERE Target_TBL.VersionID = 100 AND SECOND_B.APP_ID = 9876;

MERGE INTO Target_TBL
USING Lookup_C
   ON (Target_TBL.Country_ID = Lookup_C.Country_ID)
 WHEN MATCHED THEN UPDATE SET Target_TBL.Amount_LOCAL = Lookup_C.Amount_LOCAL
  WHERE Target_TBL.VersionID = 100;
</code>
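
One detail that is easy to miss and that relates to the PDML note in the plan: parallel DML has to be enabled at the session level before the MERGE runs, in addition to any hints. A minimal sketch, reusing the first statement from the example above with an illustrative degree of parallelism:

<code>
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ PARALLEL(8) */ INTO Target_TBL
USING SOURCE_A
   ON (SOURCE_A.APP_ID = Target_TBL.APP_ID AND SOURCE_A.JOB_ID = Target_TBL.JOB_ID)
 WHEN MATCHED THEN UPDATE SET Target_TBL.email = SOURCE_A.email
  WHERE Target_TBL.VersionID = 100 AND SOURCE_A.APP_ID = 9876;

COMMIT;  -- after parallel DML the session must commit before it can query the modified table
</code>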
Categories: DBA Blogs

Number Data type declaration with different length and performance impact

Mon, 2024-03-18 09:06
1. I have a few number columns with data types declared as NUMBER, NUMBER(5), INTEGER, NUMERIC(10). I know that in a few cases the maximum data is 2 digits, yet the column is declared as NUMBER(38) / NUMBER / NUMERIC(30) / INTEGER. If I don't declare it as NUMBER(2) and instead declare it as NUMBER(38) / NUMBER / NUMERIC(30) / INTEGER, will there be any performance issue when the table has millions of records and the column is used when updating data or in a WHERE clause?
2. VARCHAR2: I have a column with 1 character (Y/N). If I declare this as VARCHAR2(1 CHAR) instead of VARCHAR2(1 BYTE), will there be any performance issue when we use this column in a WHERE condition over millions of records?
3. Is it advisable to use ANSI data types in table declarations, or is it always preferable to use Oracle data types? Will there be any performance issue?
Please advise.
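
Regarding the first point, a minimal sketch of one way to see for yourself that the declared precision does not change how a NUMBER value is physically stored (VSIZE reports the stored length in bytes):

<code>
-- The same value occupies the same number of bytes regardless of declared precision
SELECT VSIZE(CAST(42 AS NUMBER(2)))  AS bytes_number_2,
       VSIZE(CAST(42 AS NUMBER(38))) AS bytes_number_38,
       VSIZE(CAST(42 AS INTEGER))    AS bytes_integer
  FROM dual;
</code>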
Categories: DBA Blogs

PLS-00103: Encountered the symbol "RECORD" when expecting one of the following: array varray table object fixed varying opaque sparse

Thu, 2024-03-14 13:06
Here I am creating a record type with fields from emp and dept using the following syntax:

<code>
CREATE TYPE emp_dept_data IS RECORD
  (empno    NUMBER(4),
   ename    VARCHAR2(10),
   job      VARCHAR2(9),
   hiredate DATE,
   sal      NUMBER(7,2),
   dname    VARCHAR2(14)
  );
</code>

I am getting the error:

PLS-00103: Encountered the symbol "RECORD" when expecting one of the following: array varray table object fixed varying opaque sparse

Please tell me how to fix it. I am using Oracle version 19c. The record type is used in a pipelined function.
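
For context, CREATE TYPE at the schema level only accepts object and collection types; a RECORD type can only be declared inside PL/SQL (for example in a package specification). A minimal sketch of the pattern commonly used for pipelined functions in 19c, assuming the same fields as above:

<code>
-- Schema-level object type takes the place of the record
CREATE TYPE emp_dept_data AS OBJECT
  (empno    NUMBER(4),
   ename    VARCHAR2(10),
   job      VARCHAR2(9),
   hiredate DATE,
   sal      NUMBER(7,2),
   dname    VARCHAR2(14)
  );
/

-- Collection type the pipelined function can be declared to return
CREATE TYPE emp_dept_tab AS TABLE OF emp_dept_data;
/
</code>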
Categories: DBA Blogs

date format containing timezone data

Tue, 2024-03-12 06:26
I would like to know if it is possible to configure the Oracle date format to also capture the timezone in which the date and time originated.
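
A minimal sketch of the usual way to keep the originating time zone with the value, using a hypothetical events table and the TIMESTAMP WITH TIME ZONE datatype (a plain DATE has no room for zone information):

<code>
CREATE TABLE events (
  evt_id NUMBER,
  evt_ts TIMESTAMP WITH TIME ZONE   -- stores the offset/region along with date and time
);

INSERT INTO events VALUES (1, SYSTIMESTAMP);

-- The TZR format element exposes the stored zone
SELECT TO_CHAR(evt_ts, 'YYYY-MM-DD HH24:MI:SS TZR') AS evt_with_zone
  FROM events;
</code>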
Categories: DBA Blogs
