We met the fine people of Diskeeper over at the Pepcom "Wine and Dine" event, where I was introduced to the reps of Diskeeper 2010 with IntelliWrite. Diskeeper is defragging software that keeps everything in line. It's software that will help your system function more efficiently by preventing file fragmentation, which is files divided into pieces scattered around the disk. This occurs naturally when you use a disk frequently, creating, deleting, and modifying files. At some point, the OS needs to store parts of a file in noncontiguous clusters, which can slow down the speed at which data is accessed, because the HDD must search through different parts of the disk to put together a single file. Diskeeper will help minimize write and read times for the software, and for the hardware it will help lower power consumption, heat output, and general wear and tear by keeping all the data in one spot.

It's a small file with a quick install, very easy. I installed it on the PC first since I was very worried about how it would operate on the system. I usually leave it running overnight or go watch a movie; I thought to myself, I can do that myself. I tested it out on my home PC and the Home Server and found it delivered.

(The only time we run defrag is when creating a new server image from scratch that will be used to deploy multiple VMs. After cleanups and configuration, we run it on the base image before it is templated.)

On the flipside, just to give the idea the benefit of a very large doubt: even if there was some performance bump from defragging VMs, is it worth a relatively large I/O spike on the storage system if something like a preventative defrag for all VMs was scheduled? It doesn't make sense to me. In my experience, there's never a magic bullet. If *a* server is having performance problems, we look into it: configs, OS, apps, hardware, etc.
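To illustrate the mechanism described above, here is a toy Python model I put together for this post (a simplification, not how NTFS or Diskeeper actually behave): deleting files leaves holes, a later file fills whatever free clusters come first, and reading that file back then costs extra head seeks.

```python
# Toy model of fragmentation: a disk of fixed-size clusters where
# deletions leave holes, so a new file's data lands in whatever free
# clusters come first -- i.e., it gets fragmented.

DISK_SIZE = 32

def first_fit_allocate(free, n_clusters):
    """Grab the first n free cluster numbers, contiguous or not."""
    picked = sorted(free)[:n_clusters]
    for c in picked:
        free.remove(c)
    return picked

def seeks_to_read(clusters):
    """Count head movements: each jump to a non-adjacent cluster is a seek."""
    return sum(1 for a, b in zip(clusters, clusters[1:]) if b != a + 1)

free = set(range(DISK_SIZE))
a = first_fit_allocate(free, 8)      # file A gets clusters 0-7
b = first_fit_allocate(free, 8)      # file B gets clusters 8-15
for c in a[::2]:                     # delete half of file A's clusters
    free.add(c)
big = first_fit_allocate(free, 8)    # new file fills the holes plus the tail

print(big)                  # [0, 2, 4, 6, 16, 17, 18, 19] -- noncontiguous
print(seeks_to_read(big))   # 4 seeks, versus 0 for a contiguous file
```

Defragmenters rewrite the scattered clusters back into one contiguous run so the read needs no extra seeks, which is where the claimed wear-and-tear and power savings come from.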
Just chiming in since this has been a topic in our environment, too. Our practical thinking is that we don't generally see the kinds of poor performance and degradation that call for a "standard operating procedure" of "preventative maintenance" such as defragging. We have also heard claims (from defrag software vendors) of performance benefits, and there is even a "whitepaper" or two floating around with such claims, but I don't put much stock in them. We avoid defragging VMs and have standing requests to have it turned off via GPO for our Windows servers.

I've asked this question a couple of times of VMware professionals and always got a similar response to the above. You'd need specific use cases to verify the benefit of doing any defragmentation. Any sort of simple filesystem reallocation on the VM will have little actual benefit. In fact, as it'll reallocate all the data, it'll probably have a detrimental effect on the NetApp storage system, the snapshots, and any replication you have.

What areas of the filesystem are you looking to benefit? If you're talking Exchange or SQL, then arguably the data wouldn't be in a VMDK, but a database defrag from the application may benefit as it also rebuilds the indexes. A filesystem defrag will have no benefit, and perhaps may further fragment a large database file. Running a LUN reallocate directly on the NetApp may have some good benefits to performance, however, as this will optimise the data layout so that read patterns can be more efficient and use more contiguous blocks for the corresponding LUN. The read-ahead algorithms and techniques of NetApp WAFL make the benefit of filesystem defragmentation really minimal.
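To picture why a contiguous layout helps read-ahead, here is a toy cache model (purely illustrative, not NetApp's actual read-ahead algorithm): on each miss the storage fetches the requested block plus the next few, so a sequential layout turns most reads into cache hits, while a fragmented layout defeats the prefetch entirely.

```python
# Toy read-ahead model: every cache miss prefetches the next few
# blocks after the one requested, so sequential layouts mostly hit.

READ_AHEAD = 4

def misses(block_order):
    """Count how many reads actually have to touch the disk."""
    cache, n_misses = set(), 0
    for blk in block_order:
        if blk not in cache:
            n_misses += 1
            cache.update(range(blk, blk + READ_AHEAD))  # prefetch ahead
    return n_misses

contiguous = list(range(16))              # file laid out sequentially
fragmented = [5 * i for i in range(16)]   # blocks 5 clusters apart

print(misses(contiguous))   # 4: one disk fetch per read-ahead window
print(misses(fragmented))   # 16: every single block is a miss
```

The catch the replies above keep making: a guest-level defrag rearranges the VM's *logical* blocks, which says nothing about where WAFL placed them physically, so the prefetch-friendly layout has to come from the storage side (e.g. a LUN reallocate), not from Diskeeper inside the guest.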
I would say this is a *bad* idea if you're using a NetApp filer as storage for your VMs. WAFL (the underlying NetApp filesystem) does not arrange data sequentially on disks like most filesystems. It essentially writes all data to the end of the aggregate, including rewritten data, so even after running the 'defrag' utility the data on the physical disks would still be "fragmented". You also have dedupe to consider: if you run a defrag on your VM disks, then all the data that Diskeeper moves around will become re-duplicated until the dedupe process runs (which will essentially un-defragment the data). The simple fact of the matter is that the VM's filesystem will 99.9999% of the time not correspond to the underlying disk arrangement; you'll simply be wasting I/O and CPU resources running Diskeeper. If you're worried about fragmentation affecting performance, I wouldn't worry at the VM level. If you're still worried about fragmentation you should take a look at the NetApp reallocate command, and a few of the following links:

(Andrew Miller's response is much better than mine, lol)
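The dedupe point is easy to model. Below is a toy post-process deduplication store (an illustration of the general idea only, not WAFL's implementation): a guest defrag rewrites identical content to new locations, so backend usage grows until the next dedupe pass collapses the duplicates again.

```python
# Toy post-process dedupe: writes always land as new physical blocks;
# a later dedupe pass collapses blocks with identical content.
import hashlib

class Backend:
    def __init__(self):
        self.blocks = []            # physical blocks actually stored

    def write(self, data):
        self.blocks.append(data)    # every write consumes a new block

    def dedupe_pass(self):
        """Keep one copy of each distinct block, reclaim the rest."""
        seen, kept = set(), []
        for blk in self.blocks:
            digest = hashlib.sha256(blk).digest()
            if digest not in seen:
                seen.add(digest)
                kept.append(blk)
        self.blocks = kept

vm_file = [b"blockA", b"blockB", b"blockC"]

backend = Backend()
for blk in vm_file:
    backend.write(blk)
backend.dedupe_pass()
print(len(backend.blocks))   # 3 unique blocks stored

# A guest defrag rewrites the same content to new locations; the
# backend just sees fresh writes, so usage doubles for a while.
for blk in vm_file:
    backend.write(blk)
print(len(backend.blocks))   # 6 -- the data is "re-duplicated"
backend.dedupe_pass()
print(len(backend.blocks))   # back to 3 once dedupe runs again
```

So between two dedupe runs the defragged VM costs extra space and extra replication/snapshot churn, for blocks that end up collapsed back together anyway.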