Assuming there are no hardware issues with SQL Server but performance is still not up to the mark, the following query can provide useful information for troubleshooting.
SELECT a.*, p.query_plan, p.number
FROM sys.dm_exec_procedure_stats AS a
CROSS APPLY sys.dm_exec_query_plan(a.plan_handle) AS p;
This query returns several very useful columns:
cached_time: Time at which the procedure's plan was last compiled and added to the plan cache.
last_worker_time: CPU time, in microseconds, consumed by the most recent execution.
last_physical_reads: Number of pages read from the physical storage system during the most recent execution.
last_logical_reads: Number of pages read from the buffer pool, rather than physical storage, during the most recent execution.
query_plan: XML representation of the query plan used in the most recent run of the procedure.
The last_* columns hold only the most recent execution, i.e. a single record per cached plan. Running this query from a scheduled job and storing the output in a temporary table gives you, after a couple of minutes, enough detail for investigation. More than one query plan may exist for a single procedure; comparing the plans can reveal the differences.
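The snapshot approach described above could be sketched as follows. This is a minimal illustration, not a definitive implementation: the table name ProcStatsSnapshot is an assumption, and in practice you would schedule the second statement from a SQL Agent job.

```sql
-- First run: create the snapshot table and capture an initial snapshot.
-- (ProcStatsSnapshot is an assumed name; adjust to your conventions.)
SELECT GETDATE() AS capture_time,
       DB_NAME(ps.database_id) AS database_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.cached_time,
       ps.last_execution_time,
       ps.last_worker_time,
       ps.last_logical_reads,
       ps.last_physical_reads,
       qp.query_plan
INTO dbo.ProcStatsSnapshot
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp;

-- Subsequent runs (e.g. every few minutes from a job): append a new snapshot.
INSERT INTO dbo.ProcStatsSnapshot
SELECT GETDATE(),
       DB_NAME(ps.database_id),
       OBJECT_NAME(ps.object_id, ps.database_id),
       ps.cached_time,
       ps.last_execution_time,
       ps.last_worker_time,
       ps.last_logical_reads,
       ps.last_physical_reads,
       qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp;
```

Once a few snapshots have accumulated, grouping the table by procedure_name and comparing the stored query_plan values shows whether a procedure picked up a different plan between runs.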
In one real-world scenario, I found that a procedure was performing a few million logical reads just to return 100 rows. A problem with one of the joins was causing the excessive logical reads.
This is not the end: you can also compare physical reads with logical reads, and consider how frequently the procedure is being recompiled. Too much recompilation adds CPU overhead.
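To spot the heaviest readers at a glance, the same DMV can be sorted by last_logical_reads. A sketch, using only columns documented on sys.dm_exec_procedure_stats; note that cached_time moving forward between checks indicates the plan was recompiled and re-cached:

```sql
-- Top cached procedures by logical reads in their most recent execution.
SELECT TOP (10)
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.execution_count,
       ps.cached_time,            -- advances when the plan is recompiled
       ps.last_logical_reads,
       ps.last_physical_reads
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.last_logical_reads DESC;
```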
Note: Logical reads are preferable to physical reads. A high share of logical reads shows that the query is getting its data from the buffer pool, whereas a high number of physical reads is a sign of a potential I/O bottleneck.
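The logical-versus-physical comparison in the note above can be expressed as a ratio. This is a rough sketch; the ratio only describes the most recent execution, and any filtering threshold you apply on top of it is an arbitrary assumption:

```sql
-- Procedures whose most recent run did noticeable physical I/O
-- relative to its logical reads; a high ratio hints at an I/O bottleneck.
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.last_logical_reads,
       ps.last_physical_reads,
       CAST(ps.last_physical_reads AS float)
           / NULLIF(ps.last_logical_reads, 0) AS physical_to_logical_ratio
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.last_physical_reads > 0
ORDER BY physical_to_logical_ratio DESC;
```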