This post is about troubleshooting a slow query in Oracle Database 11g.
In one case, a query took 5 seconds against a table of about 1 million records.
That is slow. There are many possible causes for this kind of issue, such as disk I/O problems, network latency, or a missing or degraded table index.
Let us use the table index as an example:
First, we need to check the database server's resources, such as CPU, memory, and disk. Are there enough resources while the query is running?
Second, we might want to check whether any tablespaces are running out of space (see the query after this list).
Third, we need to find out which indexes are actually used when the query runs.
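For the tablespace check in the second step, one option (a minimal sketch, assuming we can read the DBA views) is the 11g usage-metrics view, which already computes the percentage for us:
select tablespace_name, round(used_percent, 1) as used_pct   -- how full each tablespace is
from dba_tablespace_usage_metrics
order by used_percent desc;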
Stage 1: we might want to try the following:
select index_name from user_indexes where table_name = 'XXX_DUMP';   -- list the indexes on the table (the dictionary stores names in uppercase)
alter index my_suspicious_idx monitoring usage;                      -- start tracking usage of the suspect index
select count(*) from xxx_dump;                                       -- run the slow query (or a representative one) so the index gets a chance to be used
select index_name, used from v$object_usage;                         -- USED = YES confirms the index was touched
This way, we can confirm that the index we are interested in is actually being used.
Stage 2: validate the index structure
analyze index my_suspicious_idx validate structure;                  -- populates INDEX_STATS for this one index
select pct_used from index_stats where name = 'MY_SUSPICIOUS_IDX';   -- note: INDEX_STATS stores the name in uppercase
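While INDEX_STATS is still populated, we can also get a rough feel for fragmentation by comparing deleted leaf rows to total leaf rows (any percentage threshold here is only a rule of thumb, not an Oracle-documented limit):
select name, height, lf_rows, del_lf_rows,
       round(del_lf_rows / nullif(lf_rows, 0) * 100, 1) as del_pct   -- a high value suggests the index carries many dead entries
from index_stats
where name = 'MY_SUSPICIOUS_IDX';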
We may also use 'set autotrace' in SQL*Plus to look at the query execution plan. If 'TABLE ACCESS FULL' shows up where we expected an index, we have a strong clue about how the issue could be resolved quickly. In this case we eventually identified the index as the problem, so the troubled index was rebuilt.
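A minimal sketch of both steps in SQL*Plus (the index name is the placeholder used above; the slow query itself is not shown here):
set autotrace traceonly explain
-- run the slow query here and look for TABLE ACCESS FULL in the plan output
set autotrace off
alter index my_suspicious_idx rebuild online;   -- ONLINE rebuild avoids blocking DML but requires Enterprise Edition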
Understanding why a query takes longer than expected is hard because there can be many reasons behind it.
Getting a full picture of what is happening requires investigation, and that takes time.
Oracle's In-Memory option might be considered a shortcut. The idea is simple: keep heavily used tables in memory to speed up full table access. Of course, this solution costs memory, and the In-Memory option is only available from 12c onward.
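As an illustration only (this requires 12c or later with the INMEMORY_SIZE parameter set on the instance, and 'sales' is just a hypothetical table name):
alter table sales inmemory priority high;                  -- ask Oracle to populate this table into the In-Memory column store
select segment_name, populate_status from v$im_segments;   -- check whether population has completed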