You are working with a developer who is reporting slow performance for the following stored procedure call:
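The original call is not reproduced here. As a hypothetical sketch, assume a procedure that takes a single end-of-month date parameter (the procedure and parameter names are assumptions, consistent with the runtime values seen later in the walkthrough):

```sql
-- Hypothetical reproduction of the reported call; procedure and
-- parameter names are assumed, not from the original article.
EXEC dbo.usp_charge_report @charge_dt = '2013-02-28';
```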
You ask what issue the developer is seeing, but the only additional information you hear back is that it is 'running slowly.' So you jump on the SQL Server instance and look at the actual execution plan, because you are interested not only in what the plan looks like but also in the estimated versus actual row counts:
Looking first just at the plan operators, you can see a few noteworthy details:
There is a warning in the root operator
There is a table scan for both tables referenced at the leaf level (charge_jan and charge_feb), and you wonder why both are still heaps without clustered indexes
You see rows flowing only from the charge_feb table, not the charge_jan table
You see parallel zones in the plan
As for the warning in the root operator, you hover over it and see missing index warnings with a recommendation for the following indexes:
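The recommended definitions are not shown in the text. A sketch of what such missing-index suggestions typically look like for these two heaps, assuming the predicate column is a charge date (column and index names are assumptions):

```sql
-- Hypothetical missing-index suggestions; column, index, and
-- included-column names are assumed.
CREATE NONCLUSTERED INDEX IX_charge_jan_charge_dt
    ON dbo.charge_jan (charge_dt)
    INCLUDE (charge_amt);

CREATE NONCLUSTERED INDEX IX_charge_feb_charge_dt
    ON dbo.charge_feb (charge_dt)
    INCLUDE (charge_amt);
```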
You ask the original database developer why there isn't a clustered index, and the reply is 'I don't know.'
Continuing the investigation before making any changes, you look at the Plan Tree tab in SQL Sentry Plan Explorer and you do indeed see that there are significant skews between the estimated versus actual rows for one of the tables:
There seem to be two issues:
An under-estimate for rows in the charge_jan table scan
An over-estimate for rows in the charge_feb table scan
So the cardinality estimates are skewed, and you wonder if this is related to parameter sniffing. You decide to check the parameter compiled value and compare it to the parameter runtime value, which you can see on the Parameters tab:
Indeed there are differences between the runtime value and the compiled value. You copy the database to a prod-like test environment and then test execution of the stored procedure with the runtime value of 2/28/2013 first and then 1/31/2013 afterwards.
The 2/28/2013 and 1/31/2013 plans have identical shapes but different actual data flows. The 2/28/2013 plan and cardinality estimates were as follows:
And while the 2/28/2013 plan shows no cardinality estimation issue, the 1/31/2013 plan does:
So the second plan shows the same over and under-estimates, just reversed from the original plan you looked at.
You decide to add the suggested indexes to the prod-like test environment for both the charge_jan and charge_feb tables to see if that helps. Executing the stored procedure with the January value and then the February value, you see the following new plan shapes and associated cardinality estimates:
The new plan uses an Index Seek operation on each table, but you still see zero rows flowing from one of the tables, and you still see cardinality estimate skews from parameter sniffing when the runtime value falls in a different month from the compile-time value.
Your team has a policy of not adding indexes without proof of sufficient benefit and associated regression testing, so you decide, for the time being, to remove the nonclustered indexes you just created. You don't immediately address the missing clustered index either, deciding to take care of it later.
At this point you realize you need to look further into the stored procedure definition, which is as follows:
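The definition itself is elided. A plausible sketch, consistent with the rest of the walkthrough, is a single-parameter procedure that queries charge_view by date (procedure, parameter, and column names are assumptions):

```sql
-- Hypothetical procedure definition; all names are assumed.
CREATE PROCEDURE dbo.usp_charge_report
    @charge_dt DATETIME
AS
BEGIN
    SET NOCOUNT ON;

    -- Query the view that unions the per-month charge tables.
    SELECT charge_no, charge_dt, charge_amt
    FROM dbo.charge_view
    WHERE charge_dt = @charge_dt;
END;
```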
Next you look at the charge_view object definition:
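The view definition is also elided. Given that the data is split into per-month tables, a plausible sketch is a UNION ALL over the two tables (column names are assumptions):

```sql
-- Hypothetical view definition; column names are assumed.
CREATE VIEW dbo.charge_view
AS
SELECT charge_no, charge_dt, charge_amt FROM dbo.charge_jan
UNION ALL
SELECT charge_no, charge_dt, charge_amt FROM dbo.charge_feb;
```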
The view references charge data that is separated into different tables by date. And then you wonder if the second query execution plan skew can be prevented through changing the stored procedure definition.
Perhaps if the optimizer knows at runtime what the value is, the cardinality estimate issue will go away and improve overall performance?
You go ahead and redefine the stored procedure, adding a RECOMPILE hint (you've heard this can increase CPU usage from repeated compilations, but since this is a test environment, you feel safe giving it a try):
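Continuing the hypothetical sketch above, the redefinition might look like this, with a statement-level OPTION (RECOMPILE) hint on the query (names remain assumptions):

```sql
-- Hypothetical redefinition with a statement-level RECOMPILE hint;
-- all names are assumed.
ALTER PROCEDURE dbo.usp_charge_report
    @charge_dt DATETIME
AS
BEGIN
    SET NOCOUNT ON;

    SELECT charge_no, charge_dt, charge_amt
    FROM dbo.charge_view
    WHERE charge_dt = @charge_dt
    OPTION (RECOMPILE);  -- compile a fresh plan using the runtime value
END;
```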
You then re-execute the stored procedure using the 1/31/2013 value and then the 2/28/2013 value.
The plan shape stays the same, but now the cardinality estimate issue is removed.
The 1/31/2013 cardinality estimate data shows:
And the 2/28/2013 cardinality estimate data shows:
That makes you happy for a moment, but then you realize the overall query duration seems about the same as it was before. You begin to doubt that the developer will be happy with your results. You've solved the cardinality estimate skew, but without the expected performance boost, you're unsure whether you've helped in any meaningful way.
It's at this point that you realize that the query execution plan is just a subset of the information you might need, and so you expand your exploration further by looking at the Table I/O tab. You see the following output for the 1/31/2013 execution:
And for the 2/28/2013 execution you see similar data:
It's at that point that you wonder if the data access operations for both tables are necessary in each plan. If the optimizer knows you only need January rows, why access February at all, and vice versa? You also remember that the query optimizer has no guarantees that there aren't actual rows from the other months in the 'wrong' table unless such guarantees were made explicitly via constraints on the table itself.
You check the table definitions via sp_help for each table and you don't see any constraints defined for either table.
So as a test, you add the following two constraints:
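The constraint definitions are not shown. A sketch of trusted CHECK constraints that restrict each table to its month, giving the optimizer the guarantee it needs to eliminate the other table (constraint and column names are assumptions):

```sql
-- Hypothetical CHECK constraints limiting each table to its month;
-- constraint and column names are assumed. WITH CHECK validates
-- existing rows so the constraints are trusted by the optimizer.
ALTER TABLE dbo.charge_jan WITH CHECK
    ADD CONSTRAINT CK_charge_jan_charge_dt
    CHECK (charge_dt >= '2013-01-01' AND charge_dt < '2013-02-01');

ALTER TABLE dbo.charge_feb WITH CHECK
    ADD CONSTRAINT CK_charge_feb_charge_dt
    CHECK (charge_dt >= '2013-02-01' AND charge_dt < '2013-03-01');
```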
You re-execute the stored procedure and see the following plan shapes and cardinality estimates.
1/31/2013 execution:
2/28/2013 execution:
Looking at Table I/O again, you see the following output for the 1/31/2013 execution:
And for the 2/28/2013 execution you see similar data, but for the charge_feb table:
Remembering that the RECOMPILE hint is still in the stored procedure definition, you remove it to see whether the constraints alone have the same effect. After doing this, the two-table access returns, but with no actual logical reads against the empty table (compared to the original plan without the constraints). For example, the 1/31/2013 execution shows the following Table I/O output:
You decide to move forward with load-testing the new CHECK constraints and RECOMPILE solution, removing the table access entirely from the plan (and the associated plan operators). You also prepare yourself for a debate about the clustered index key and a suitable supporting nonclustered index that will accommodate a broader set of workloads that currently access the associated tables.