dbcc cleantable batch size explanation


I have a very large table with 500 million rows and a Text column that I will be dropping.
In my Dev environment, I have dropped the column and begun the reclaim process, but I'm not sure what the batch size in the DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 100000) statement actually does.

I tried setting it to 5, expecting it to process the first 5 rows and stop: DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 5). It took 28 hours.
So I restored the database, set it to 100,000, and it took 4 hours.

Actual question:
Does the batch size tell DBCC CLEANTABLE how many rows to process at a time, continuously running 100K at a time until it has gone through all 500 million rows?
Or, once I run it with 100,000, do I have to run it again until all 500 million rows are done?

On my second test (running with 100K once) I was able to reclaim 30 GB. Then I ran an index reorg on ALL indexes and reclaimed an additional 60 GB.
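For reference, the sequence described above would look like this in T-SQL. This is a sketch: the column name TextCol is a placeholder, since the real column name isn't given in the question.

```sql
-- Drop the unused Text column (TextCol is a placeholder name)
ALTER TABLE dbo.LargeTbl DROP COLUMN TextCol;

-- Reclaim the space left behind; the third argument is the batch size,
-- i.e. the number of rows processed per internal transaction
DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 100000);
```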










      sql-server sql-server-2016 dbcc






edited 12 hours ago by Paul White

asked 12 hours ago by Tomasz
2 Answers
In addition to the great answer by armitage: you probably do not need to use DBCC CLEANTABLE in your scenario.

You state:

    Then I ran an index reorg on ALL indexes and reclaimed an additional 60 GB.

The best practices section of the Microsoft documentation says:

    DBCC CLEANTABLE should not be executed as a routine maintenance task. Instead, use DBCC CLEANTABLE after you make significant changes to variable-length columns in a table or indexed view and you need to immediately reclaim the unused space. Alternatively, you can rebuild the indexes on the table or view; however, doing so is a more resource-intensive operation.

It seems like time and space are your biggest goals. Generally, rebuilding an index is quicker (but more resource-intensive) than a reorg. Since you are working on a Development server, just rebuild your indexes: you will get the benefits of the index reorg and DBCC CLEANTABLE at the same time, and probably much quicker.

Note: Rebuild and Reorganize are not the same thing:

• Reorganize and Rebuild Indexes (Microsoft)
• Rebuild or Reorganize: SQL Server Index Maintenance (Brent Ozar)
• SQLskills SQL101: REBUILD vs. REORGANIZE (Paul Randal)
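The rebuild suggested here can be done for all indexes on the table in one statement. A sketch, using the table name from the question; options such as ONLINE depend on your edition and are omitted:

```sql
-- Rebuild every index on the table; this rewrites all index pages,
-- reclaiming space from the dropped column and defragmenting in one pass
ALTER INDEX ALL ON dbo.LargeTbl REBUILD;
```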





answered 11 hours ago by James Jenkins (edited 10 hours ago)
• I thought the same thing and ran the test in reverse: 1) dropped the column, 2) defragged all indexes (only reclaimed 30 GB), 3) ran CLEANTABLE and got 60 GB. Looks like I need both; this is a one-time thing. – Tomasz, 11 hours ago

• @Tomasz I edited my answer. I'm not sure what you mean by "defrag all indexes", but Reorg (what you said in your question) and Rebuild (what I said in this answer) are not the same thing. – James Jenkins, 11 hours ago

• Ah, sorry. I reorganized them each time. I will run one more test where I drop the column and rebuild the indexes, and share the results. Thank you. – Tomasz, 11 hours ago


















According to the Microsoft documentation, the batch size tells DBCC CLEANTABLE the number of rows to process per transaction; it controls how many rows DBCC CLEANTABLE handles internally in each transaction as the process runs.

By taking the example in the documentation, modifying it to add a million rows, and then running the sample script multiple times with varying values for batch size (see below), it appears that specifying a small batch size increases the execution time, as DBCC CLEANTABLE only operates on the specified number of rows per transaction:

• No batch size specified
• A batch size of 5
• A batch size of 100,000
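The three test runs listed above correspond to these statements. Per the documentation, omitting the batch size (or passing 0) processes the whole table in a single transaction:

```sql
DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl');         -- no batch size: one transaction for the whole table
DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 5);      -- tiny batches: many transactions, very slow (28 hours in the question)
DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 100000); -- 100K-row batches: far fewer transactions (4 hours in the question)
```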





answered 12 hours ago by armitage
• So just to confirm: the process will go through the entire 500 million rows, just "exclusively locking" 100K at a time, and will also allow log backups to occur? – Tomasz, 11 hours ago










