Problem Statement
Disclaimer:
You’re about to dive into our most detailed and complex diagnostic yet.
Buckle up and read through to the end to unravel this thrilling mystery!!
Note: We completed this diagnostic with the collaborative efforts and expert guidance of the Oracle Support and Development teams.
A few days after completing the cross-platform migration from Solaris to Linux, the application team began encountering the following errors while collecting statistics on certain tables:
ORA-20011: Approximate NDV failed: ORA-12801: error signaled in parallel query server P00Q, instance demodb-abcdef2:DEMODB1
ORA-08103: object no longer exists
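For context, the failing statistics collection was a routine gather of the kind sketched below. This is illustrative only; the exact call issued by the application team is not reproduced here, and the schema and table names are taken from later in this post.

-- Illustrative sketch of the failing statistics gather (not the team's actual script).
-- AUTO_SAMPLE_SIZE drives the Approximate NDV code path named in ORA-20011, and
-- AUTO_DEGREE allows the parallel query servers (P00Q) seen in the error stack.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DEMOUSER',
    tabname          => 'ARCH1_DEMO_TABLE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => DBMS_STATS.AUTO_DEGREE);
END;
/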
In addition, any DML or DDL operations (such as TRUNCATE or INSERT) on these specific tables were also failing with the following error:
ORA-00600: internal error code, arguments: [ktsBlkChekError], [449], [2835349513]
Subsequently, within a few minutes of encountering the ORA-00600 error, the database instance crashed.
Diagnosis
Before initiating our detailed diagnostics, we first reviewed the recent activities performed on this database. We identified that the database recently underwent a cross-platform migration from Solaris to Linux, following the steps outlined in the Oracle document below:
V4 Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2471245.1)
The source database was running Oracle version 12.1.0.2, while the target database was on version 19.25. After the migration, the team configured tablespace encryption and then collected routine table statistics for all tables. The application functioned normally for approximately 4 to 5 days after the migration.
Subsequently, the application team began encountering the errors described in our problem statement.
A proactive Service Request had already been raised with Oracle Global Support, and we initiated our diagnostics in collaboration with the Oracle support team.
To begin our diagnostic process, we divided the analysis into three distinct phases:
- Identify the root cause of the database crash.
- Investigate the occurrence of the ORA-00600 error.
- Determine the cause of the ORA-08103 error.
Identify the root cause of the database crash
Our investigation began with a review of the database alert log from the date the issue was reported.
The first corruption-related error was identified at this point. In addition to the corruption error, we also observed a message indicating that an index on the archive table DEMOUSER.ARCH1_DEMO_TABLE had been marked UNUSABLE. When we discussed this with the application team, they confirmed that a TABLE MOVE COMPRESS operation had recently been executed on the same table as part of routine purging activity.
Interestingly, the timestamp of the index unusable message in the alert log aligned exactly with the timing of the first corruption error. However, at this stage, it was unclear whether these two events were directly related.
Some indexes or index [sub]partitions of table DEMOUSER.ARCH1_DEMO_TABLE have been marked unusable
*****************************************************************
An internal routine has requested a dump of selected redo.
This usually happens following a specific internal error, when
analysis of the redo logs will help Oracle Support with the diagnosis.
It is recommended that you retain all the redo logs generated (by
all the instances) during the past 12 hours, in case additional
redo dumps are required to help with the diagnosis.
*****************************************************************
2025-05-16T10:31:50.703660+05:30
Corrupt Block Found
TIME STAMP (GMT) = 05/16/2025 10:31:49
CONT = 0, TSN = 46, TSNAME = DEMODATA
RFN = 676, BLK = 9, RDBA = 2835349513
OBJN = 4294967295, OBJD = 4294967295, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =
Shortly thereafter, the database diagnostic process indicated a potential hang, linked to session ID 6874, serial# 62636, OSPID 220965.
2025-05-16T10:34:43.767508+05:30
DIA0 Critical Database Process As Root: Hang ID 4 blocks 51 sessions
Final blocker is session ID 6874 serial# 62636 OSPID 220965 on Instance 1
If resolvable, instance eviction will be attempted by Hang Manager
2025-05-16T10:36:47.437416+05:30
DIA0 Critical Database Process As Root: Hang ID 5 blocks 65 sessions
Final blocker is session ID 6874 serial# 62636 OSPID 220965 on Instance 1
If resolvable, instance eviction will be attempted by Hang Manager
2025-05-16T10:41:24.192388+05:30
DIA0 Critical Database Process As Root: Hang ID 6 blocks 83 sessions
Final blocker is session ID 6874 serial# 62636 OSPID 220965 on Instance 1
If resolvable, instance eviction will be attempted by Hang Manager
2025-05-16T10:41:50.479744+05:30
DIA0 Critical Database Process Blocked: Hang ID 6 blocks 85 sessions
Final blocker is session ID 6874 serial# 62636 OSPID 220965 on Instance 1
If resolvable, process termination will be attempted by Hang Manager
Errors in file /u02/app/oracle/diag/rdbms/demodb/DEMODB1/trace/DEMODB1_dia0_220886.trc (incident=1058593):
ORA-32701: Possible hangs up to hang ID=6 detected
Incident details in: /u02/app/oracle/diag/rdbms/demodb/DEMODB1/incident/incdir_1058593/DEMODB1_dia0_220886_i1058593.trc
The Database Writer process (DBW1) had itself become unresponsive for several minutes, blocking multiple sessions that depended on its write activity.
-------------------------------------------------------------------------------
Chain 1:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (demodb.demodb1)
os id: 309623
process id: 480, oracle@demodb-abcdef
session id: 4
session serial #: 44916
module name: 0 (DDTEK ODBC Oracle (TNS V1-V3))
}
is waiting for 'buffer busy waits' with wait info:
{
p1: 'file#'=0x3
p2: 'block#'=0x1a4c00
p3: 'class#'=0x8
time in wait: 27.007935 sec
timeout after: never
wait id: 3
blocking: 0 sessions
current sql_id: 1607123340
wait history:
* time between current wait and wait #1: 0.013581 sec
1. event: 'SQL*Net message from client'
time waited: 0.000368 sec
wait id: 2 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
* time between wait #1 and #2: 0.000005 sec
2. event: 'SQL*Net message to client'
time waited: 0.000000 sec
wait id: 1 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
* time between wait #2 and #3: 0.001598 sec
3. event: 'ges resource directory to be unfrozen'
time waited: 0.000001 sec
wait id: 0
}
and is blocked by
=> Oracle session identified by:
{
instance: 1 (demodb.demodb1)
os id: 169772
process id: 99, oracle@demodb-y9qhg1
session id: 4049
session serial #: 35254
module name: 0 (JDBC Thin Client (TNS V1-V3))
}
which is waiting for 'log file switch (checkpoint incomplete)' with wait info:
{
time in wait: 8 min 45 sec
timeout after: never
wait id: 334
blocking: 13 sessions
current sql_id: 100652897
wait history:
* time between current wait and wait #1: 0.000351 sec
1. event: 'SQL*Net message from client'
time waited: 10.719104 sec
wait id: 333 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
* time between wait #1 and #2: 0.000026 sec
2. event: 'SQL*Net message to client'
time waited: 0.000000 sec
wait id: 332 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
* time between wait #2 and #3: 0.000096 sec
3. event: 'SQL*Net message from client'
time waited: 0.000417 sec
wait id: 331 p1: 'driver id'=0x28444553
p2: '#bytes'=0x1
}
and is blocked by
=> Oracle session identified by:
{
instance: 1 (demodb.demodb1)
os id: 220965
process id: 39, oracle@demodb-y9qhg1 (DBW1)
session id: 6874
session serial #: 62636
}
which is on CPU or Wait CPU:
{
last wait: 2 min 28 sec ago
blocking: 86 sessions
current sql_id: 0
wait history:
1. event: 'cell ram cache population'
time waited: 0.000002 sec
wait id: 12835589 p1: 'cellhash#'=0x2899122e
* time between wait #1 and #2: 0.000001 sec
2. event: 'cell ram cache population'
time waited: 0.000005 sec
wait id: 12835588 p1: 'cellhash#'=0xf09df33b
* time between wait #2 and #3: 0.000106 sec
3. event: 'cell ram cache population'
time waited: 0.000002 sec
wait id: 12835587 p1: 'cellhash#'=0x2899122e
}
Chain 1 Signature: 'CPU or Wait CPU'<='log file switch (checkpoint incomplete)'<='buffer busy waits'
Because a critical background process was blocked, the DIA0 process initiated termination of instance 1 (the local instance), and the LMHB process (OSPID 220944) carried out the termination.
2025-05-16T10:41:56.127200+05:30
DIA0 Terminating instance 1 (local instance)
due to a GLOBAL, HIGH confidence Hang with ID=6,
involving 86 total sessions. Ignored count is 0.
Critical background process blocked in a branched chain.
This is a GLOBAL hang due to a session in a branched chain.
Hang Resolution Reason: Automatic hang resolution was performed to free a
significant number of affected sessions.
2025-05-16T10:41:56.594366+05:30
LMHB (ospid: 220944): terminating the instance due to ORA error 484
2025-05-16T10:41:56.595177+05:30
Cause - 'KILL_INSTANCE: GCR has been requested to terminate the instance.
LMHB (ospid: 220944) is terminating the instance.'
During our investigation into the cause of the DBW1 (Database Writer) process hang, we analyzed the associated trace file for OSPID 220965.
The trace content showed that the DBW1 process was stuck on file# 449 (rfile# 676), block# 9, the exact location where the corruption had been detected. The corrupt block prevented DBW1 from processing that buffer, leaving the process in a hung state; as a result, the hang-resolution diagnostics terminated the instance and the database crashed.
BH (0x507e6fe4b8) file#: 449 rdba: 0xa9000009 (676/9) class: 12 ba: 0x5079ef4000
  set: 259 pool: 3 bsz: 8192 bsi: 0 sflg: 0 pwc: 0,25
  dbwrid: 1 obj: -1 objn: -1 tsn: [0/46] afn: 449 hint: f
  hash: [0x4b8be438a8,0x4b8be438a8] lru: [0x5dbdd805e0,0x583d9dffc0]
  obj-flags: object_ckpt_list
  ckptq: [0x557d70c658,0x4b95f41198] fileq: [0x4b95f44cb0,0x4b95f44cb0]
  objq: [0x4a014535a8,0x4a014535a8] objaq: [0x5dbdd80618,0x583d9dfff8]
  use: [0x4badc874f0,0x4badc874f0] wait: [NULL]
  st: XCURRENT md: EXCL fpin: 'ktfbwh40: ktfbbfef'
  fscn: 0x129b3ca8d5cd tch: 1
  le: 0x52ff66edb8
  flags: mod_started encrypted_on_disk remote_transfered
  change state: ACTIVE
  change count: 1
  LRBA: [0x38cb.1cc7d7.0] LSCN: [0x0] HSCN: [0x0] HSUB: [65535]
  BPS flag not present.
  BPS: Printing buffer operation history (latest change first): (cnt 2)
Although this analysis allowed us to identify the root cause of the crash, the origin of the block corruption remained unresolved at that stage.
Read on to uncover the hidden culprit behind this mysterious corruption !!!!
Investigate the occurrence of the ORA-00600 error
Following the database crash and subsequent restart, the majority of DML operations began failing with corruption-related errors. Each of these corruption errors was accompanied by the following ORA-00600 internal error.
2025-05-18T09:45:42.311740+05:30
Corrupt Block Found
TIME STAMP (GMT) = 05/18/2025 09:45:41
CONT = 0, TSN = 46, TSNAME = DEMODATA
RFN = 676, BLK = 9, RDBA = 2835349513
OBJN = 4294967295, OBJD = 4294967295, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =
Errors in file /u02/app/oracle/diag/rdbms/demodb/DEMODB1/trace/DEMODB1_w001_208668.trc (incident=1259060):
ORA-00600: internal error code, arguments: [ktsBlkChekError], [449], [2835349513], [18018], [], [], [], [], [], [], [],
Incident details in: /u02/app/oracle/diag/rdbms/demodb/DEMODB1/incident/incdir_1259060/DEMODB1_w001_208668_i1259060.trc
Notably, the ORA-00600 errors consistently referenced file# 449 and (rfile#/block#) (676/9), the same block whose corruption led to the initial database crash.
Doing block recovery for file 449 block 9
Resuming block recovery (PMON) for file 449 block 9
Block recovery from logseq 14762, block 20 to scn 0x0000000000000000
2025-05-18T09:45:45.472744+05:30
Recovery of Online Redo Log: Thread 1 Group 2 Seq 14762 Reading mem 0
  Mem# 0: +DATAC1/DEMODB/ONLINELOG/group_2.296.1186763655
  Mem# 1: +RECOC1/DEMODB/ONLINELOG/group_2.478.1200854443
Block recovery completed at rba 0.0.0, scn 0x0000129bdf7e449c
This time, the W001 process (Space Management Worker Process) triggered the error. It handles background space management activities like proactive space allocation and space reclamation.
Given the W001 process's role, the error appears to have occurred while it was attempting to allocate space in file# 449 on behalf of a DML operation; the process could not manage the space correctly, which resulted in the ORA-00600.
Upon reviewing the diagnostic incident file generated by the W001 process, it was evident that the affected block (rfile# 676, block# 9) was a metadata block of type 'KTFB Bitmapped File Space Bitmap'. We observed similar corruption indicators in the Disk Block Image section of the same trace file.
Block after image:
buffer tsn: 46 rdba: 0xa9000009 (676/9)
scn: 0x129b2ba2d521 seq: 0x01 flg: 0x04 tail: 0xd5211e01
frmt: 0x02 chkval: 0x06a4 type: 0x1e=KTFB Bitmapped File Space Bitmap
Hex dump of corrupt header 3 = CHKVAL
Dump of memory from 0x0000004836D7C000 to 0x0000004836D7C014
4836D7C000 0000A21E A9000009 2BA2D521 0401129B  [........!..+....]
4836D7C010 000006A4                             [....]
Hex dump of corrupt block
Disk Block image:
Decrypting encrypted buffer before dump.
buffer tsn: 46 rdba: 0xa9000009 (676/9)
scn: 0x129b2ba2d521 seq: 0x01 flg: 0x04 tail: 0xd5211e01
frmt: 0x02 chkval: 0x06a4 type: 0x1e=KTFB Bitmapped File Space Bitmap
Encrypted buffer - dump only metadata
Block scn: 0x129b2ba2d521
File Space Bitmap Block:
BitMap Control:
RelFno: 676, BeginBlock: 3047552, Flag: 0, First: 12272, Free: 395
This strongly indicated block corruption in datafile 449, specifically in a critical metadata block responsible for tracking file space allocation. As a result, any extent allocation involving this file was failing with corruption errors and the associated ORA-00600 [ktsBlkChekError].
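At this point it can also help to capture the contents of the suspect bitmap block for Oracle Support. The commands below are a sketch of one way to do that, not a step quoted from the actual Service Request.

-- Illustrative: dump the suspect space bitmap block into an easily identifiable trace file.
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'file449_blk9';
ALTER SYSTEM DUMP DATAFILE 449 BLOCK 9;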
Interestingly, we ran both RMAN BACKUP VALIDATE and DBVERIFY, and they reported no corruption—neither in the impacted datafile nor anywhere else in the database.
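For completeness, the validation we ran was of the following form. The datafile path in the DBVERIFY example is a placeholder (our datafiles actually reside in ASM, where DBVERIFY additionally needs a USERID parameter), so treat this as a sketch rather than the exact commands from our runbook.

RMAN> BACKUP VALIDATE CHECK LOGICAL DATAFILE 449;

-- Any blocks flagged by the validation would then be listed here:
SELECT file#, block#, blocks, corruption_type
FROM   v$database_block_corruption;

$ dbv FILE=/path/to/demodata_449.dbf BLOCKSIZE=8192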
In conclusion, we identified metadata block corruption in file# 449 block# 9 as the root cause of the ORA-00600 errors, which also seems to have triggered the original database crash.
Yet, the mystery remains—how this corruption found its way into the database is still unknown.
Determine the cause of the ORA-08103 error
ORA-08103 indicates that a SQL statement has accessed a block which no longer belongs to the expected object. This typically occurs when the block’s ownership or structure has changed unexpectedly.
A common cause occurs when the system expects a data block (block type = 6) but encounters a block of a different, invalid, or unexpected type instead. This inconsistency suggests structural corruption or mismatch.
Another possible cause is a mismatch in the data_object_id, which may be changing concurrently for the object being accessed. If such a change happens during SQL execution, it can result in the block appearing as invalid or unrelated to the expected object, triggering ORA-08103.
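A quick sanity check for this second cause is to compare the object's OBJECT_ID with its DATA_OBJECT_ID: the two diverge after segment-changing operations such as TRUNCATE or MOVE, and a cursor holding a stale DATA_OBJECT_ID is a classic ORA-08103 trigger. The query below is a generic sketch using the table from our case.

-- Illustrative: compare the logical object id with the current data (segment) object id.
SELECT owner, object_name, object_type, object_id, data_object_id
FROM   dba_objects
WHERE  owner = 'DEMOUSER'
AND    object_name = 'ARCH1_DEMO_TABLE';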
We referred to the following support document to diagnose the ORA-08103 error.
OERR: ORA-8103 “Object No Longer Exists” Primary Note / Troubleshooting, Diagnostic and Solution (Doc ID 8103.1)
Our diagnostic process began with the assm_tablespace_verify procedure, as recommended in the above support document.
This utility is particularly effective at detecting overlapped extents, i.e. cases where multiple segments are incorrectly mapped to the same blocks. Such inconsistencies typically arise from bitmap extent corruption in Locally Managed Tablespaces or from data dictionary anomalies.
Given that our analysis of the ORA-00600 error pointed toward a bitmap extent issue, running assm_tablespace_verify was a logical and relevant first step.
However, the verification itself failed with the 'ORA-08103: object no longer exists' error, further confirming the presence of corruption.
oradebug setmypid
oradebug tracefile_name
/u02/app/oracle/diag/rdbms/demodb/DEMODB1/trace/DEMODB1_ora_384120.trc
execute dbms_space_admin.assm_tablespace_verify('&tablespace_name',dbms_space_admin.TS_VERIFY_BITMAPS);
Enter value for tablespace_name: DEMODATA
BEGIN dbms_space_admin.assm_tablespace_verify('DEMODATA',dbms_space_admin.TS_VERIFY_BITMAPS); END;
*
ERROR at line 1:
ORA-08103: object no longer exists
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 362
ORA-06512: at line 1
Upon reviewing the trace file generated by the tablespace verification process, we identified multiple RDBA entries associated with ORA-08103 errors.
RDBA, or Relative Data Block Address, is an internal Oracle identifier that uniquely pinpoints a specific block within a datafile. It serves as a precise locator used by the database engine to read from or write to a particular block.
*** 2025-05-18T10:54:23.743229+05:30
[kteinckf] ORA-8103 unexpected block type:6 (0x6)
[kteinckf] ORA-08103 in rdba:0x025b8ea2
*** 2025-05-18T10:54:24.762111+05:30
ORA-08103 - objd: 3469279 objn: -1 tsn: 46 rdba: 0x82f6b53b
ORA-08103 - objd: 3463381 objn: -1 tsn: 46 rdba: 0x2ab6dc01
*** 2025-05-18T10:54:26.170554+05:30
ORA-08103 - objd: 3469044 objn: -1 tsn: 46 rdba: 0x6b1741eb
ORA-08103 - objd: 3469056 objn: -1 tsn: 46 rdba: 0x818f8173
*** 2025-05-18T10:54:26.938673+05:30
ORA-08103 - objd: 2730681 objn: -1 tsn: 46 rdba: 0xaebbc807
*** 2025-05-18T10:54:28.256175+05:30
ORA-08103 - objd: 2217449 objn: -1 tsn: 46 rdba: 0xed427001
*** 2025-05-18T10:54:31.757656+05:30
[kteinckf] ORA-8103 unexpected block type:6 (0x6)
[kteinckf] ORA-08103 in rdba:0x7a020453
*** 2025-05-18T10:54:32.796350+05:30
ORA-08103 - objd: 2887704 objn: -1 tsn: 46 rdba: 0xe983be12
ORA-08103 - objd: 2884373 objn: -1 tsn: 46 rdba: 0xeccace12
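Because the objd values in the trace are data object ids, they can usually be mapped straight back to segment names, subject to how badly the dictionary and the segments disagree. The lookup below is a sketch using the ids from the trace above.

-- Illustrative: resolve the data object ids reported in the trace to their segments.
SELECT owner, object_name, subobject_name, object_type, data_object_id
FROM   dba_objects
WHERE  data_object_id IN (3469279, 3463381, 3469044, 3469056,
                          2730681, 2217449, 2887704, 2884373);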
To investigate the error, we began by converting the RDBA value to its decimal form. This allowed us to extract the corresponding file# and block# using the following SQL query:
SELECT dbms_utility.data_block_address_file(847318534) AS relative_fno,
dbms_utility.data_block_address_block(847318534) AS block#
FROM dual;
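Since the trace reports the RDBAs in hexadecimal, the conversion can also be folded into the same query. The example below is a small convenience sketch using one of the RDBAs from the trace.

-- Illustrative: decode a hex RDBA from the trace in a single step.
SELECT dbms_utility.data_block_address_file(to_number('025B8EA2', 'XXXXXXXX'))  AS relative_fno,
       dbms_utility.data_block_address_block(to_number('025B8EA2', 'XXXXXXXX')) AS block#
FROM   dual;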
With the obtained file# and block#, we attempted to identify the associated database object by querying the DBA_EXTENTS view:
SELECT owner, segment_name, partition_name, segment_type
FROM   dba_extents
WHERE  file_id = 1086
AND    block_id <= 1805986
AND    block_id + blocks > 1805986;
In our case, the query returned no results, indicating that the block in question was not part of any permanent segment. This typically points to a case of temporary segment corruption.
To confirm this, we queried the DBA_SEGMENTS view for temporary segments in the relevant tablespace:
set line 150 pagesize 400
col owner for a10
col segment_name for a20
col tablespace_name for a20
select owner, segment_name, tablespace_name, header_file, header_block,
       bytes/1024/1024 as "Size(MB)"
from dba_segments
where segment_type = 'TEMPORARY'
and tablespace_name = 'DEMODATA'
order by 1, 2;

Below is a sample output from this query, listing the temporary segments found in the DEMODATA tablespace.

OWNER      SEGMENT_NAME         TABLESPACE_NAME      HEADER_FILE HEADER_BLOCK   Size(MB)
---------- -------------------- -------------------- ----------- ------------ ----------
DEMOUSE    1086.1805986         DEMODATA                    1086      1805986      .0625
We performed a cleanup of all corrupted temporary segments using the following DBMS_SPACE_ADMIN procedures:
EXECUTE DBMS_SPACE_ADMIN.SEGMENT_CORRUPT('DEMODATA', &relative_fno, &block_number);
EXECUTE DBMS_SPACE_ADMIN.SEGMENT_DROP_CORRUPT('DEMODATA', &relative_fno, &block_number);
Here, we passed the tablespace name (DEMODATA) along with the relative_fno and block_number values derived from the decoded RDBA list.
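Taking the temporary segment from the sample output above as an example, the calls took roughly this form; every corrupted segment from the decoded RDBA list was handled the same way.

-- Example invocation using the header file/block from the sample output above.
EXECUTE DBMS_SPACE_ADMIN.SEGMENT_CORRUPT('DEMODATA', 1086, 1805986);
EXECUTE DBMS_SPACE_ADMIN.SEGMENT_DROP_CORRUPT('DEMODATA', 1086, 1805986);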
After completing the cleanup and dropping all the identified corrupted temporary segments, we re-executed the ASSM_TABLESPACE_VERIFY procedure. This time, however, it failed with the same ORA-00600 error that had first appeared after the database crash.
oradebug setmypid
execute dbms_space_admin.assm_tablespace_verify('DEMODATA',dbms_space_admin.TS_VERIFY_BITMAPS);
BEGIN dbms_space_admin.assm_tablespace_verify('DEMODATA',dbms_space_admin.TS_VERIFY_BITMAPS); END;
*
ERROR at line 1:
ORA-00607: Internal error occurred while making a change to a data block
ORA-00600: internal error code, arguments: [ktsBlkChekError], [449],
[2835349513], [18018], [], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 362
ORA-06512: at line 1
Interestingly, the issue once again pointed to the exact same datafile and block where we had previously identified corruption.
2025-05-18T13:07:17.530473+05:30
Corrupt Block Found
TIME STAMP (GMT) = 05/18/2025 13:07:17
CONT = 0, TSN = 46, TSNAME = DEMODATA
RFN = 676, BLK = 9, RDBA = 2835349513
OBJN = 1, OBJD = 4294967295, OBJECT = , SUBOBJECT =
SEGMENT OWNER = , SEGMENT TYPE =
Eventually, we realized that all three incidents, although occurring at different phases, were connected to a single root cause: block-level corruption in that specific datafile.
This correlation strongly indicated the need for a deeper analysis of the block corruption.
Now…… What next????
Having established that the issues encountered across the different phases all resulted from metadata block corruption, we still did not know how many blocks in the database were affected. At this stage, Oracle's development team recommended validating potential extent overlaps using the SQL statement provided below:
select a.tablespace_name, a.file_id, a.block_id, a.blocks, a.relative_fno,
b.owner, b.segment_name, b.segment_type, b.file_id, b.block_id,
b.blocks, b.relative_fno
from dba_free_space a, dba_extents b
where a.file_id = b.file_id
and a.block_id <= b.block_id+b.blocks-1
and a.block_id+a.blocks-1>=b.block_id
and a.tablespace_name = 'DEMODATA';
Upon executing the SQL, we discovered that multiple application objects had overlapping extents: certain data blocks were marked as free space while other objects were actively using them. This overlap between free space and allocated segments contributed to the metadata block corruption and prevented the database from executing DML operations on the affected files and blocks.
As a temporary mitigation, we attempted to recreate the listed indexes, and we identified a few affected tables that we considered dropping and recreating. However, due to application constraints, drop-and-reload operations were not feasible for these critical tables, and CTAS also failed with an ORA-08103 error. With no reliable backup available and the corruption spread across many objects, we could not proceed further with repairs; the only viable option was to reinitiate the entire cross-platform migration.
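For reference, the index recreation attempts were plain rebuild or drop-and-create statements along these lines; this is only an illustration, using an index name that appears later in this post, not the exact DDL we executed.

-- Illustrative: attempt to rebuild an index reported in the overlapping-extent list.
ALTER INDEX DEMOUSER.PK_1234_OLD REBUILD ONLINE;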
Unfortunately, even though we planned to redo the migration, we still had not identified a clear root cause for the metadata block corruption. This uncertainty raised concerns that the issue could recur even after the entire migration was redone.
Alright then… What’s the next step?
As we reviewed the situation, we couldn’t pinpoint exactly when the corruption entered the database. It was unclear if it happened on the day we first saw the error or right after the cross-platform migration. During a detailed team discussion, we recalled taking a valid RMAN backup after completing the migration steps. This was done just before releasing the database for application use.
To dig deeper, we decided to restore this backup on a separate system. This would help us check if the corruption or overlapping extents existed immediately after the migration.
We discovered that one of the primary key indexes (DEMOUSER.PK_1234_OLD), belonging to the table DEMOUSER.ARCH1_DEMO_TABLE, appeared in the overlapped-extent list on the restored copy. This was the same table that had undergone the TABLE MOVE COMPRESS operation on the day of the issue, confirming that the overlapping extents originated immediately after the cross-platform migration.
As a result, when we executed the TABLE MOVE operation on the issue day, it deallocated blocks still in use by the overlapping index. Since multiple other objects further referenced the same blocks, the operation overwrote them, leading to metadata block corruption.
It was now clear that the corruption surfaced because of the operations performed on the day of the issue, but that its root cause went back to the cross-platform migration.
Yet, the most intriguing mystery still lingered: why was this specific index already part of the overlapping-extent list immediately after the cross-platform migration?
Thrilling final stretch of diagnostics
We coordinated with the application team to identify any recent changes involving the table (ARCH1_DEMO_TABLE) and its index (PK_1234_OLD). The team provided the following summary of actions taken a few days before the start of the migration (a sketch of the corresponding SQL follows the list):
- A new table, DEMOUSER.DEMO_TABLE_NEW, was created based on the structure and data of the original DEMO_TABLE using CTAS.
- The primary key constraint on DEMO_TABLE was removed; however, the associated index (PK_1234) was retained.
- The index PK_1234 was subsequently renamed to PK_1234_OLD.
- All other indexes on DEMO_TABLE, apart from PK_1234, were dropped.
- Finally, the DEMO_TABLE was renamed to ARCH1_DEMO_TABLE.
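Expressed as SQL, the sequence would look roughly like this. It is a reconstruction from the summary above rather than the team's actual scripts, and the constraint name, column lists, and storage clauses are assumed or omitted.

-- Reconstruction of the pre-migration steps described above (illustrative only).
CREATE TABLE demouser.demo_table_new AS SELECT * FROM demouser.demo_table;  -- CTAS copy

ALTER TABLE demouser.demo_table DROP CONSTRAINT pk_1234 KEEP INDEX;         -- constraint dropped, index retained
ALTER INDEX demouser.pk_1234 RENAME TO pk_1234_old;
-- (all other indexes on DEMO_TABLE dropped here)

ALTER TABLE demouser.demo_table RENAME TO arch1_demo_table;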
This is the moment we nailed the final root cause — a major breakthrough in our analysis !!!!
So, what led to this????
During this process, the primary key constraint was dropped but its associated index was deliberately kept. This created an orphaned index, and the migration's import scripts no longer included the constraint's metadata.
As we proceeded with the import phase of the transportable tablespace (TTS) migration, the tablespace bitmap was rebuilt without recognizing the orphaned index, so the blocks used by the index were mistakenly marked as FREE. The index, however, still physically existed in the datafiles and was automatically re-created during the final plug-in import.
This discrepancy led to a block-overlap condition: the tablespace bitmap indicated the blocks were free while they were actually in use by the orphaned index (PK_1234_OLD). The conflict surfaced as the ORA-00600 internal corruption errors following the TABLE MOVE COMPRESS operation performed on ARCH1_DEMO_TABLE on the day of the issue.
Although the commands used were standard, the unintended persistence of the orphaned index highlights a limitation in the TTS process that warrants refinement.
Oracle resolved this issue in the following bug fix:
Bug 33171426 – Possible data corruption after transportable tablespace import due to orphaned indexes
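If the fix is delivered as an interim patch for your Release Update rather than already being included in it (an assumption on our part), a quick way to check whether a given Oracle home already contains it is:

$ $ORACLE_HOME/OPatch/opatch lspatches
$ $ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep 33171426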
Solution Implemented
And with that, we wrapped up our thrilling journey of uncovering the root cause behind the corruption issues!
Driven by this breakthrough, we proceeded to re-execute the entire cross-platform migration, this time incorporating the critical bug fix along with additional enhancements recommended by Oracle Support and development team.
Following this re-run, no corruption errors were observed, and the application is now performing as expected.
