I recently wrote a white paper for Infinio on this topic and am also doing a webinar based on the white paper. If you don’t know Infinio, you should definitely give them a look, as they have a clever solution for solving storage performance challenges in vSphere environments. The white paper should be available very soon on Infinio’s website, and you can sign up now for the webinar on 12/18. I look forward to seeing you there, and I’ll also be doing a more detailed post on their solution soon.
What techie geek doesn’t love Doctor Who? I’ve been watching it since Tom Baker was the Doctor. If you love it as well, check out this latest promotion from Unitrends, which caught my eye. It looks like the folks at Unitrends went on a shopping spree at ThinkGeek, and they’re giving away some cool Doctor Who gadgets simply for downloading a free trial of Unitrends. I’ll readily admit that I own two sonic screwdrivers myself, the 10th Doctor’s and the 11th Doctor’s; mine aren’t remotes though, which would be a nice upgrade. So head on over to the Unitrends website and sign up, and hopefully you’ll win some cool Doctor Who swag to get prepared for the upcoming 50th anniversary show.
This year the top VMware & virtualization blogger voting will be bigger and better thanks to Veeam. We’ll be doing random prize giveaways both for bloggers that make it into the top 50 and for the voters. I’ll randomly pick 3 blogger names and 4 voter names to win prizes, which include a Mac Mini, iPad Mini, HP MicroServer, Beats headphones, a Roku and a Wii U.
I’ll kick it off in December with a call for nominations for the specific blogger categories, and the voting will open up in January. Every year I hear from some bloggers who complain that they were not included in the voting. I make every effort to add any blog that I notice isn’t listed, but I can’t catch them all. So to make sure your blog is included in the voting form, it must be listed on my vLaunchPad site. Please go there and check, and if it isn’t listed, please use this form to let me know. Please include your name, blog URL, RSS feed URL and Twitter URL. It may take me a bit to get your blog added, as I only update the site every few weeks, so keep checking back to see if it is listed.
So bloggers, if you haven’t blogged much this year, you had better get blogging, as this is an opportunity for you to not only get some great recognition for all your hard work but also win some great prizes. Special thanks to Veeam for making this possible, and if you’d like to show your thanks as well, click on the link below and check out their great products.
There have been so many documents, white papers, videos and blog posts published about the vSphere 5.5 release that it’s hard to keep up with them all. I have almost 500 links gathered in my vSphere 5.5 Link-O-Rama, and it is still growing. With so many links it’s easy to miss some of the really good ones, so I thought I would put together a top 10 list that highlights the ones you don’t want to miss.
1 - The official VMware What’s New in vSphere 5.5 white paper series
Traditionally VMware releases a slew of What’s New white papers to support a new vSphere release, each covering a specific area (e.g. storage, platform, networking). These white papers cover the new features and enhancements in a lot more detail than the standard one-page overview document that covers them at a high level. With the vSphere 5.5 release VMware only released one initially (platform), then followed up with two more that cover specific storage features. Despite there being fewer white papers for this release, you should still give these a read, as they provide good in-depth information written by VMware’s technical experts that will help you better understand the changes and new features in vSphere 5.5:
- What’s New in VMware vSphere 5.5 Platform (VMware Tech Paper)
- What’s New in VMware vSphere Flash Read Cache (VMware Tech Paper)
- What’s New in VMware Virtual SAN (VSAN) (VMware Tech Paper)
2 - Duncan’s VSAN & vFRC posts
Despite being released as a beta feature, Virtual SAN (VSAN) is still one of the most popular features in vSphere 5.5. VSAN has attracted a lot of interest, and people are hungry to know everything they can about it. Well, Duncan Epping over at Yellow Bricks has written 14 blog posts on VSAN to help feed that appetite. Another great new storage feature (that you can use now) in vSphere 5.5 is vSphere Flash Read Cache (vFRC); Duncan has written some must-read posts on that as well. So if you want to benefit from Duncan’s inside knowledge and experience with VSAN & vFRC, follow the Yellow Brick road and give these a read:
- Introduction to VMware vSphere Virtual SAN (Yellow Bricks)
- Frequently asked questions about Virtual SAN / VSAN (Yellow Bricks)
- Virtual SAN news flash pt 1 (Yellow Bricks)
- Testing vSphere Virtual SAN in your virtual lab with vSphere 5.5 (Yellow Bricks)
- How do you know where an object is located with Virtual SAN? (Yellow Bricks)
- How VSAN handles a disk or host failure (Yellow Bricks)
- Virtual SAN and Data Locality/Gravity (Yellow Bricks)
- Isolation / Partition scenario with VSAN cluster, how is this handled? (Yellow Bricks)
- Initialized disks to be used by VSAN task completed successfully, but no disks added? (Yellow Bricks)
- I created a folder on my VSAN datastore, but how do I delete it? (Yellow Bricks)
- Be careful when defining a VM storage policy for VSAN (Yellow Bricks)
- Designing your hardware for Virtual SAN (Yellow Bricks)
- Pretty pictures Friday, the VSAN edition… (Yellow Bricks)
- VMware vSphere Virtual SAN design considerations… (Yellow Bricks)
- Introduction to vSphere Flash Read Cache aka vFlash (Yellow Bricks)
- Frequently asked questions about vSphere Flash Read Cache (Yellow Bricks)
- Something to know about vSphere Flash Read Cache (Yellow Bricks)
- vSphere Flash Read Cache and esxcli (Yellow Bricks)
3 - Chris Wahl’s vSphere 5.5 Improvements series
Chris Wahl has been rising fast in the blogosphere rankings (currently #12) by creating tons of great, high-quality content, and he doesn’t disappoint when it comes to writing about vSphere 5.5. Chris has written a nicely detailed multi-part series that covers many different topics and will give you a great overview of the many improvements in vSphere 5.5:
- vSphere 5.5 Improvements Part 1 - The New Hotness in ESXi 5.5 (Wahl Network)
- vSphere 5.5 Improvements Part 2 - Pushing For A Software-Defined Data Center (SDDC) (Wahl Network)
- vSphere 5.5 Improvements Part 3 – Lions, Tigers, and 62TB VMDKs (Wahl Network)
- vSphere 5.5 Improvements Part 4 - Virtual SAN (VSAN) (Wahl Network)
- vSphere 5.5 Improvements Part 5 - vSphere Flash Read Cache (vFlash) (Wahl Network)
- vSphere 5.5 Improvements Part 6 - Site Recovery Manager (SRM) and vSphere Replication(Wahl Network)
- vSphere 5.5 Improvements Part 7 – Single Sign On Completely Redesigned (Wahl Network)
- vSphere 5.5 Improvements Part 8 – Network Virtualization with NSX (Wahl Network)
- vSphere 5.5 Improvements Part 9 - Networking and VDS Razzle Dazzle (Wahl Network)
4 - Derek Seaman’s Installing vSphere 5.5 series
Derek Seaman is another blogger who has been quickly climbing the blogging ladder and made it into the top 25 last year. Derek appears to be trying to outdo himself after last year’s epic 13-part blog series on installing vSphere 5.1, with a new 16-part (and counting?) series on installing vSphere 5.5. This series will guide you through all the different steps of installing vCenter Server and its various components and help you avoid any gotchas that you might encounter. Derek spends a lot of time covering SSL & SSO, which can be very challenging and frustrating to implement properly, so I highly recommend you give it a read before trying to install or upgrade to vSphere 5.5 yourself:
- vSphere 5.5 Install Pt. 1: Introduction (Derek Seaman)
- vSphere 5.5 Install Pt. 2: SSO Reborn (Derek Seaman)
- vSphere 5.5 Install Pt. 3: Upgrading vCenter (Derek Seaman)
- vSphere 5.5 Install Pt. 4: ESXi 5.5 Upgrade (Derek Seaman)
- vSphere 5.5 Install Pt. 5: SSL Deep Dive (Derek Seaman)
- vSphere 5.5 Install Pt. 6: Certificate Template (Derek Seaman)
- vSphere 5.5 Install Pt. 7: Install SSO (Derek Seaman)
- vSphere 5.5 Install Pt. 8: Online SSL Minting (Derek Seaman)
- vSphere 5.5 Install Pt. 9: Offline SSL Minting (Derek Seaman)
- vSphere 5.5 Install Pt. 10: Replace SSO Certs (Derek Seaman)
- vSphere 5.5 Install Pt. 11: Install Web Client (Derek Seaman)
- vSphere 5.5 Install Pt. 12: Configure SSO (Derek Seaman)
- vSphere 5.5 Install Pt. 13: Install Inventory Svc (Derek Seaman)
- vSphere 5.5 Install Pt. 14: Create Databases (Derek Seaman)
- vSphere 5.5 Install Pt. 15: Install vCenter (Derek Seaman)
- vSphere 5.5 Install Pt. 16: vCenter SSL (Derek Seaman)
5 - Cormac Hogan’s VSAN series
If you don’t know Cormac Hogan, you should; he’s a senior technical marketing architect at VMware whose focus is storage, a subject he knows a lot about. He produces a lot of great content for VMware on the vSphere Blog and also writes on his own personal blog, which you should definitely bookmark. With each new vSphere release Cormac does a series focused on what’s new with storage, and he’s back at it again in vSphere 5.5 with a multi-part series on VSAN. Cormac also started a series on what’s new in storage in vSphere 5.5, but has only posted part 1 so far, so hopefully he continues it. By the time you’ve finished reading through his 10 blog posts on VSAN you’ll probably know as much about it as Cormac does; well, probably not, but you’ll still learn a lot.
- VSAN Part 1 - A first look at VSAN (Cormac Hogan)
- VSAN Part 2 - What do you need to get started? (Cormac Hogan)
- VSAN Part 3 - It is not a Virtual Storage Appliance (Cormac Hogan)
- VSAN Part 4 – Understanding Objects and Components (Cormac Hogan)
- VSAN Part 5 - The role of VASA (Cormac Hogan)
- VSAN Part 6 - Manual or Automatic Mode (Cormac Hogan)
- VSAN Part 7 - Capabilities and VM Storage Policies (Cormac Hogan)
- VSAN Part 8 – The role of the SSD (Cormac Hogan)
- VSAN Part 9 - Host Failure Scenarios & vSphere HA Interop (Cormac Hogan)
- VSAN Part 10 - Changing VM Storage Policy on-the-fly (Cormac Hogan)
6 - VMware’s Performance Best Practices for vSphere 5.5 white paper
Who doesn’t want to optimize their vSphere environment to gain the best possible performance? Well, VMware has published a white paper on performance best practices for vSphere 5.5 that is full of great advice to help ensure your vSphere environment is running at top speed. This 90-page white paper is a collection of best practices covering all parts of vSphere, from hosts to VMs to vCenter Server, and is a definite must-read even if you think you know it all.
For example, did you know that Windows guest operating systems poll optical drives (CD or DVD drives) quite frequently, and that unused or unnecessary virtual hardware devices can impact performance and should be disabled? Or did you know that the more vCPUs a virtual machine has, the more interrupts it requires, and that delivering many virtual timer interrupts negatively impacts virtual machine performance and increases host CPU consumption? These are all things you should know if you want to keep your application owners happy, so give this paper a read; I guarantee you will learn something from it.
- Performance Best Practices for VMware vSphere 5.5 (VMware Tech Paper)
7 - Michael Webster & Jason Boche’s Storage Deep Dives
I love deep dives that are full of technical details that can help you really gain a better understanding of a particular topic or feature. Michael Webster has published a great deep dive on the new larger virtual disk size that is a very good read, with a lot of great information and considerations for pumping up your virtual disk size. Jason Boche has published a great deep dive on the details of the changes to the UNMAP command in vSphere 5.5, including how to use it and its performance impacts. So if you want to take a deep dive into the pool of storage knowledge, get your swimming trunks on and dive in.
- vSphere 5.5 Jumbo VMDK Deep Dive (Long White Clouds)
- vSphere 5.5 UNMAP Deep Dive (VMware vEvangelist)
8 - VMware licensing
Another year, another vSphere release, another licensing change, another headache. What you know about VMware’s licensing has probably all changed so it’s time to study up on it again and figure it all out. Be sure and hurry up and learn it before it all changes again.
9 - RTFM
I shouldn’t have to tell you this, but I will: reading the fricking manuals can really be helpful. I know most of us don’t like to read manuals and just want to dive into playing with the products, but VMware actually makes some really good documentation that is more than just your typical step-by-step instructions. The document I always read first for every new vSphere release is the Configuration Maximums doc, to see what has changed with scalability; you can read my write-up on the changes from vSphere 5.1 to vSphere 5.5. I also highly encourage you to check out the separate documentation on Networking, Storage, Security, Availability and Resource Management. These are great guides for learning about the technology and getting some deep-dive information on it.
VMware even makes it easier for you by publishing the documentation in multiple formats such as HTML, PDF, ePub and Mobi, so you can download it to your device of choice and carry it around with you. Maybe someday they’ll even put it in audiobook format so you can listen to James Earl Jones tell you how to configure Storage DRS while driving to work; how cool would that be? Also don’t forget to read the release notes, as you can often find some great nuggets in there. Don’t have time for it? Don’t worry, Maish over at Technodrone has done it for you and provided a great write-up on what he found.
- VMware vSphere 5.5 Release Notes (vmware.com)
- vSphere 5.5 Product Documentation — PDF and E-book Formats (vmware.com)
- vSphere 5.5 Configuration Maximums (vmware.com)
- vSphere Installation and Setup (vmware.com)
- vSphere Upgrade (vmware.com)
- vSphere vCenter Server and Host Management (vmware.com)
- vSphere Virtual Machine Administration (vmware.com)
- vSphere Host Profiles (vmware.com)
- vSphere Networking (vmware.com)
- vSphere Storage (vmware.com)
- vSphere Security (vmware.com)
- vSphere Resource Management (vmware.com)
- vSphere Availability (vmware.com)
- vSphere Monitoring and Performance (vmware.com)
- vSphere Single Host Management (vmware.com)
- vSphere Troubleshooting (vmware.com)
10 - VMware Knowledgebase Articles
The VMware Knowledgebase has more than just solutions to problems; it also has a lot of great information and how-to articles as well. The VMware KB is a fountain of information that contains dozens of great, informative articles specific to vSphere 5.5. This includes articles that will help you with upgrading and installing vSphere 5.5, as well as tons of great tips, gotchas and solutions to issues. So before you even touch vSphere 5.5, save yourself some frustration by reading through the VMware KB, and I guarantee your journey to vSphere 5.5 will be much smoother.
- vSphere 5.5 is here! – KBs you need to know about (VMware Support Insider Blog)
So there you have it, the top 10 things you should read about vSphere 5.5. I’m sure I missed some other great ones as well, so feel free to shout out in the comments any additional links that you feel people must read. Also be sure to bookmark my vSphere 5.5 Link-O-Rama; new links are added daily and you will find almost everything you need there to get you going with vSphere 5.5.
VMware continues to publicly publish sessions from VMworld 2013 so even if you did not attend you can still access a lot of great sessions. This list will be updated as any other new session recordings are released:
- Monday General Session: VMworld 2013 San Francisco - Pat Gelsinger
- Tuesday General Session: VMworld 2013 San Francisco - Carl Eschenbach, Principal Engineer Kit Colbert and EMEA CTO Joe Baguley
- Tuesday General Session: VMworld Europe 2013 - Pat Gelsinger
- Wednesday General Session: VMworld 2013 Europe - Carl Eschenbach, Principal Engineer Kit Colbert and EMEA CTO Joe Baguley
Track: Business Continuity
- BCO4872 - Operating and Architecting a vSphere Metro Storage Cluster Based Infrastructure - Lee Dilworth (VMware), Duncan Epping (VMware)
- BCO5041 - vSphere Data Protection - What’s New and Technical Walkthrough - Jeff Hunter (VMware)
- BCO5065 - VMware vSphere Fault Tolerance for Multiprocessor Virtual Machines - Technical Preview - Jim Chow (VMware), Wei Xu (VMware)
- BCO5129 - Protection for All - vSphere Replication & SRM Technical Update - Lee Dilworth (VMware), Ken Werneburg (VMware)
Track: End User Computing
- EUC5291 - Horizon View Troubleshooting: Looking under the Hood - Matt Coppinger (VMware), Jack McMichael (VMware)
- EUC7370-S - The Software-Defined Data Center Meets End User Computer - Scott Davis (VMware), Frank Nydam (VMware), Mike Coleman (VMware)
Track: Networking
- NET5847 - NSX: Introducing the World to VMware NSX - Milin Desai (VMware), Sachin Thakar (VMware)
- NET7388-S - Network Virtualization: Moving Beyond the Obvious - Martin Casado (VMware)
Track: Operations Transformation
- OPT5194 - Moving Enterprise Application Dev/Test to VMware’s Internal Private Cloud- Operations Transformation - Kurt Milne (VMware), Venkat Gopalakrishn (VMware)
Track: Public & Hybrid Cloud
- PHC4783 - How To Build Your Hybrid Cloud and Consume the Public Cloud - Chris Colotti (VMware), Michael Roy (VMware)
- PHC5605-S - Everything You Want to Know About vCloud Hybrid Service - But Were Afraid to Ask - Mathew Lodge (VMware), Christopher Rence (Digital River, Inc.)
Track: Security & Compliance
- SEC5893 - Changing the Economics of Firewall Services in the Software-Defined Center – VMware NSX Distributed Firewall - Srinivas Nimmagadda (VMware), Anirban Sengupta (VMware)
Track: Storage
- STO5391 - VMware Virtual SAN - Christos Karamanolis (VMware), Vijay Ramachandran (VMware)
- STO5715-S - Software-defined Storage - The Next Phase in the Evolution of Enterprise Storage - Vijay Ramachandran (VMware), Alberto Farronato (VMware)
Track: Virtualizing Applications
- VAPP4679 - Software-Defined Datacenter Design Panel for Monster VM’s: Taking the Technology to the Limits for High Utilisation, High Performance Workloads - Frank Dennemean (PernixData), Andrew Mitchell (VMware), Mark Achtemichuck (VMware), Mostafa Khalil (VMware), Michael Webster (VMware)
Track: Virtualization & Cloud Management
- VCM7369-S - Uncovering the Hidden Truth in Log Data With vCenter Log Insight - Tim Russell (NetApp), Manesh Kumar, Jon Herlocker (VMware)
Track: vSphere & Cloud Suite
- VSVC4605 - What’s New in VMware vSphere? - Michael Adams (VMware)
- VSVC4830 - vCenter Deep Dive - Ameet Jani (VMware), Justin King (VMware)
- VSVC4944 - PowerCLI Best Practices - A Deep Dive - Luc Dekens (Eurocontrol), Alan Renouf (VMware)
- VSVC5005 - What’s New in vSphere Platform & Storage - Kyle Gleed (VMware), Cormac Hogan (VMware)
- VSVC5690 - vSphere Upgrade Series Part 1: vCenter Server - Josh Gray (VMware), Justin King (VMware)
- VSVC5821 - Performance and Capacity Management of DRS Clusters - Ganesha Shanmuganathan (VMware), Anne Holler (VMware)
VMware originally posted session replays of the top 10 sessions at VMworld 2013 for anyone (even non-attendees) to watch. Now they have posted 4 new VMworld 2013 sessions on VMware TV:
- STO5391 - VMware Virtual SAN - Christos Karamanolis (VMware), Vijay Ramachandran (VMware)
- BCO4872 - Operating and Architecting a vSphere Metro Storage Cluster Based Infrastructure - Lee Dilworth (VMware), Duncan Epping (VMware)
- VSVC5005 - What’s New in vSphere Platform & Storage - Kyle Gleed (VMware), Cormac Hogan (VMware)
- PHC4783 - How To Build Your Hybrid Cloud and Consume the Public Cloud - Chris Colotti (VMware), Michael Roy (VMware)
Definitely check out these 4 new ones; I attended the vMSC session and it was a good one, and looking at the speakers for the other sessions I can bet they are good as well.
The original top 10 sessions that were published are listed below and can be viewed here:
- VSVC4944 - PowerCLI Best Practices - A Deep Dive
- BCO5129 - Protection for All - vSphere Replication & SRM Technical Update
- STO5715-S - Software-defined Storage - The Next Phase in the Evolution of Enterprise Storage
- PHC5605-S - Everything You Want to Know About vCloud Hybrid Service - But Were Afraid to Ask
- NET5847 - NSX: Introducing the World to VMware NSX
- VCM7369-S - Uncovering the Hidden Truth in Log Data With vCenter Log Insight
- VAPP4679 - Software-Defined Datacenter Design Panel for Monster VM’s: Taking the Technology to the Limits for High Utilisation, High Performance Workloads
- EUC7370-S - The Software-Defined Data Center Meets End User Computer
- OPT5194 - Moving Enterprise Application Dev/Test to VMware’s Internal Private Cloud- Operations Transformation
- SEC5893 - Changing the Economics of Firewall Services in the Software-Defined Center – VMware NSX Distributed Firewall
The August/September timeframe has become like Christmas for vSphere geeks, as the anxiously awaited new release of vSphere arrives and they finally get to unwrap it and play with it. VMware released vSphere 5.5 on September 22nd this year, just one year after the last major vSphere 5.1 release. Overall vSphere 5.5 is a bit light on new features and enhancements compared to previous releases, and it is also missing the long-awaited new Virtual Volumes (vVols) storage architecture that VMware has been showing off for a while now.
Despite that, there is still plenty of new stuff in vSphere 5.5 that makes it a worthy upgrade. Typically each new release has some superstar new features that get a lot of attention, along with lots of smaller enhancements and features that often get overlooked. In this post I thought I’d highlight a few of the big new features and also a few of the smaller ones that often go unnoticed.
1 - Scalability
Scalability in vSphere is important as it dictates the size and number of workloads that can run on a host. By steadily increasing scalability, VMware has made it so almost any size workload can be virtualized and VM density can grow higher. On the VM side, the Monster VM has steadily grown quite large and can tackle almost any workload; however, its one weakness has always been the virtual disk size, which was limited to 2TB in past releases. That’s finally changed in vSphere 5.5, as the maximum virtual disk size has jumped to a whopping 62TB.
While the VM side got more disk, on the host side the increases were focused on compute resources. The maximum number of physical and virtual CPUs per host doubled to 320 pCPUs and 4,096 vCPUs, while the maximum physical memory doubled to 4TB. This greatly increases the VM density that you can achieve, as it allows you to pack more VMs onto a host. While the CPU limits are so high that most people will never even get close to reaching them, the memory increases are definitely welcome, as many applications running on VMs tend to be very memory hungry.
One other nice scalability jump was with the vCenter Server Appliance (vCSA), a pre-built virtual appliance complete with OS, database and the vCenter Server software installed. The big advantage of using the vCSA is that it’s simple to install and set up, which makes it very convenient, especially for users who lack database experience. The problem in vSphere 5.1 was that it was very limited in scalability and would only support the smallest of environments, up to 5 hosts and 50 VMs. That’s all changed in vSphere 5.5, as it now scales to 100 hosts and 3,000 VMs, a huge jump that will make it an attractive alternative for a much wider group of users. If you want to find out more about the scalability changes between vSphere releases, check out this post and this post.
2 - Virtual SAN
VMware’s new Virtual SAN (VSAN), not to be confused with their existing VSA offering, is their latest product as VMware continues to try to become a storage vendor. The big difference between VMware’s VSA and VSAN is that VSAN is not a virtual appliance; it is baked into the hypervisor, and it also scales much higher than the VSA, which was limited to 3 nodes. VSAN requires both SSD and traditional spinning disk, as it utilizes the SSD tier as both a read cache and a write buffer to complement the spinning-disk tier.
While VSAN was released as part of vSphere 5.5, it’s not quite ready yet and is only available in beta form.
You can sign up for the public beta here; note there is nothing to download, as it’s native to vSphere 5.5 and you just need a license key to activate it. Before you jump in and start using it, you should be aware that it currently has limited hardware support and some known issues. But it’s a beta, so you should expect that and shouldn’t be using it in production anyway. That shouldn’t stop you from giving it a try, though, as long as you meet the requirements, so you can get a look at what’s coming. And also, the correct acronym is VSAN, not vSAN; you have to love VMware’s ever-changing letter-case usage. If you want to know more about VSAN, I have a huge collection of links on it.
3 - UNMAP
UNMAP is a SCSI command (not a vSphere feature) that is used with thin-provisioned storage arrays as a way to reclaim space from disk blocks after the data that resided on those blocks has been deleted. UNMAP serves as the mechanism used by the Space Reclamation feature in vSphere to reclaim space from VMs that have been deleted or moved to another datastore. This process allows thin provisioning to clean up after itself and greatly increases the value and effectiveness of thin provisioning.
Support for UNMAP was first introduced in vSphere 5, and it was initially intended to be an automatic (synchronous) reclamation process. However, issues with storage operations timing out while vSphere waited for the process to complete on some arrays caused VMware to change it to a manual (asynchronous) process that does not work in real time. A parameter was added to the vmkfstools CLI utility that would create a balloon file and then delete it, issuing UNMAP for the disk blocks during the process to reclaim space. The problem with this was that you had to run it manually over and over, it was resource intensive, and it was not very efficient, as it tried to reclaim blocks that may not have had data written to them yet.
In vSphere 5.5 it’s still not an automatic process, unfortunately, but VMware has improved the manual process. To initiate an UNMAP operation you now use the “esxcli” command with the “storage vmfs unmap” namespace; you can pass it additional parameters to specify a VMFS volume label/UUID and the number of blocks to reclaim per iteration (the default is 200). In addition, UNMAP is now much more efficient: the run duration is greatly reduced and the reclaim efficiency is increased. As a result, where VMware previously recommended only running it off-hours so it wouldn’t impact VM workloads, you can now run it anytime and it will have minimal impact.
To see if your storage device supports UNMAP, you can run the “esxcli storage core device vaai status get -d” command, and the Delete Status will say supported if it does; you can also check the vSphere HCL to see if it’s supported and what firmware version may be required. To find out more about the changes, check out this post by Jason Boche, and if you went to VMworld, be sure to check out session STO4907 - Capacity Jail Break: vSphere 5 reclamation nuts and bolts.
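To make that concrete, here is a rough sketch of the reclaim workflow from the ESXi shell. The device identifier and datastore label below are placeholders for illustration, and it’s worth verifying the exact parameter names against your ESXi 5.5 build with “esxcli storage vmfs unmap --help”:

```shell
# Check whether the array advertises UNMAP support for a device;
# look for "Delete Status: supported" in the output.
# naa.60a98000000000000000000000000001 is a placeholder device ID.
esxcli storage core device vaai status get -d naa.60a98000000000000000000000000001

# Reclaim dead space on a VMFS volume. "Datastore01" is a hypothetical
# datastore label; 200 is the default reclaim unit (blocks per pass).
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200
```

Unlike the old vmkfstools balloon-file method, this can be left to run during production hours, though scheduling it per datastore is still up to you.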
4 – CPU C-states
vSphere can help reduce server power consumption during periods of low resource usage by throttling the physical CPUs. It can accomplish this in one of two ways: by throttling the frequency and voltage of a CPU core or by completely shutting down a CPU core. These are referred to as P-states and C-states, which are defined as follows:
- A P-state is an operational state that can alter the frequency and voltage of a CPU core from a low state (P-min) to the max state (P-max); this can help save power for workloads that do not require a CPU core’s full frequency.
- A C-state is an idle state that shuts down a whole CPU core so it cannot be used; this is done during periods of low activity and saves more power than simply lowering the CPU core frequency.
Why would you want to use this feature? Because it can save you money, especially in larger environments. It’s equivalent to staffing a restaurant: do you want your full staff standing around getting paid while doing nothing during off-peak periods? Of course not, and likewise you don’t want all your CPU cores powered on when you don’t need them; it wastes money.
Support for CPU P-states and C-states was first introduced in vSphere 4, but the Balanced (between power and performance) power policy only supported P-states. You could use C-states as well, but you had to create a custom policy for them. Now in vSphere 5.5 the Balanced power policy supports both P-states and C-states to achieve the best possible power savings. So now, while your VMs are all tucked in bed and resting at night, you can keep a green data center and save some cash. You can read more about power management in vSphere in this white paper.
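If you want to check which CPU power policy a host is using from the command line, something like the following should work. Treat the “/Power/CpuPolicy” advanced-option path as an assumption from memory and confirm it on your own build (the Balanced policy corresponds to the “dynamic” value):

```shell
# Show the current CPU power management policy advanced option.
esxcli system settings advanced list -o /Power/CpuPolicy

# Set the policy to "dynamic", which is what the Balanced policy uses;
# "static" (High Performance), "low" (Low Power) and "custom" also exist.
esxcli system settings advanced set -o /Power/CpuPolicy -s dynamic
```

You can also view and change the same setting in the vSphere Web Client under the host’s Hardware > Power Management settings.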
5 – vSphere Flash Read Cache
vSphere has had a caching feature called Content Based Read Cache, introduced in vSphere 5.0, which allowed you to allocate server RAM to be used as a disk read cache. Unfortunately, this feature was only intended to be used by VMware View to help eliminate some of the unique I/O patterns in VDI environments, such as boot storms. With vSphere 5.5 VMware has a new host-based caching mechanism called vSphere Flash Read Cache (formerly known as vFlash) that leverages local SSDs as a cache.
As the name implies, vFRC is a read-only cache that is placed directly in front of a VM’s virtual disk data path. It can be enabled on a per-VM, per-virtual-disk basis and is transparent to the guest OS and applications running in the VM. While the cache is configured per host, you can optionally set up vFRC to migrate the cache contents to another host to follow a VM. Its primary benefit is for workloads that are very read intensive (e.g. VDI), but by offloading reads to the cache it can indirectly benefit write performance as well.
Another component of vFRC is Virtual Host Flash Swap Cache, which is simply the old Swap to SSD feature introduced in vSphere 5.0 that allowed you to automatically use SSDs to host VM swap files to support memory over-commitment. To find out more about vFlash, you can check out the many links I have here and also check out this VMware white paper. Duncan also has a really good FAQ on it here.
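For poking at vFRC from the host CLI, ESXi 5.5 adds a vflash namespace to esxcli. The exact sub-commands below are as I recall them, so treat them as a sketch and confirm with “esxcli storage vflash --help” on your host:

```shell
# List local SSDs that are eligible to be used as virtual flash resources.
esxcli storage vflash device list

# List the flash read cache instances configured for VM virtual disks.
esxcli storage vflash cache list
```

The actual configuration of the virtual flash resource and per-VMDK cache reservations is done through the vSphere Web Client, so these commands are mainly useful for verification and troubleshooting.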
As VMware continues to add more features, the management challenge of keeping up with the changes to the environment gets more difficult. SolarWinds’ Virtualization Manager provides a powerful and affordable solution that takes the complexity out of managing VMware. If you’d like to learn more or download a free trial, click on the banner below.
I recently read an article on ZDNet about data protection in virtual environments that made the following statement:
“According to a survey conducted by data recovery vendor Kroll Ontrack, 80 percent of respondents did not believe they were at risk or believed they would reduce the risk of data loss when they stored data in a virtual environment.”
Did I read that right?
80 percent of 724 people that were polled at VMworld think virtualization reduces the risk of data loss?
I’d like to know who these people are and why they feel that way. Virtualization brings a lot of benefits to the data center, but magically protecting data isn’t one of them that I know of. If anything, you could argue that virtualization increases the risk of data loss, as storage becomes a single point of failure, and when failures occur they have big impacts. If you look at a traditional data center, your servers and storage are widely distributed: you have a lot of individual physical servers that each run a single application and typically store data on the server’s local hard disk. Sure, you might have a SAN that you use for storing user data and databases, but a big part of your applications and data is scattered across many servers. If you have a failure on a single server, it only impacts that server and not the rest of your environment.
With virtualization you move to a centralized storage model for everything; you still have physical servers that run your VMs, but most of those VMs are stored on a SAN that serves the whole virtual environment. With this model, the failure of a physical server in a virtual environment is typically no big deal, as none of your VM data is stored on the physical server; it all resides on the SAN. When a host fails you may lose a tiny bit of data from applications that are running and haven’t written data to disk yet, but the VM starts right up on another host and continues where it left off. Now if your storage fails you’re going to be in a world of hurt; VMware features like HA & FT only protect against host failures, and when your storage fails, all the VMs that reside on it go down.
You hear the term “all your eggs in one basket” when it comes to hosts as they run multiple VMs, but at least you have a lot of baskets in your virtual environment. When it comes to storage you truly have all your eggs in one basket, as a single storage array services many hosts. So when storage fails it has a huge impact and greatly amplifies the risk of data loss, both in the short term and the longer term. Think about it: if I have 200 VMs running on a storage array and it goes down, that’s 200 applications that suddenly had the lights turned out on them, and whatever they were doing at the time is lost. Now think about a catastrophic storage failure where you had to recover your whole environment from the previous night’s backup; multiply that by 200 VMs and that’s a lot of data loss.
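To make that amplification concrete, here's a quick back-of-the-envelope sketch. The per-VM daily change rate and the hours since the last backup are made-up illustrative numbers, not measurements:

```python
# Back-of-the-envelope: data at risk when a shared array fails and you
# must restore every VM from the previous night's backup.
# All inputs below are illustrative assumptions, not measured values.

vms_on_array = 200          # VMs hosted on the failed storage array
change_per_vm_gb = 5        # assumed data written per VM per day (GB)
hours_since_backup = 18     # assumed time elapsed since the nightly backup

# Fraction of a day's changes lost, multiplied across every affected VM
data_lost_gb = vms_on_array * change_per_vm_gb * (hours_since_backup / 24)
print(f"Estimated data lost across the environment: {data_lost_gb:.0f} GB")
```

With one physical server that same failure would have cost you one application's worth of changes; centralizing storage multiplies the exposure by every VM on the array.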
I can understand how some people might gain a false sense of security after virtualizing as they enjoy the cool new things that they can do now because of virtualization. Their previously rigid physical servers gain superpowers by becoming encapsulated into VMs which provides them with mobility to zip across hosts and datastores while running. They also now have the cool HA & FT features that means they can sleep better at night and not have to head to the office at 2am to get the Exchange server back up and running after a hardware failure. But to think that their VMs are now much safer after virtualizing is just nuts. And it’s not just hardware failures that can cause big data loss in a virtual environment, there are a lot of other things that can do it as well. With a few simple clicks of a mouse someone can delete a datastore and a lot of VMs or change a setting and shutdown your whole environment. These are things that you don’t have to worry about in a traditional data center.
Understand this: when you virtualize you’re not in Kansas anymore. When things go wrong in virtual environments they can have huge impacts. Sure, you can help mitigate the risk, but the fact remains: shit happens, and when it does you can end up covered in it.
So the moral of this story is:
Virtualization does not make your data any safer
So if you’re one of the people that took that survey and think your data is much safer because you virtualized, you better think again, and next year be sure and stop by one of the backup vendors like Veeam or Unitrends and hopefully you’ll learn about the realities of data protection in a virtual environment. And it’s not just about protecting data through backup methods; you can implement features like stretched storage clusters (vMSC) to protect against storage failures and SRM to provide more traditional off-site recovery options.
So stay safe out there and enjoy all the great benefits that virtualization provides but be smart and make sure you understand the impacts that virtualization has on data protection and what you need to do to keep your data safe.
The other day I had someone ask if 3PAR was certified for vMSC with vSphere 5.5. So naturally I went to the VMware HCL and checked by selecting ESXi 5.5 for the version, then HP as the vendor and then the FC Metro Cluster Storage as the array test configuration.
When I searched I got back 5 results, which made me think it was certified for vMSC in ESXi 5.5. However, after checking with the product teams, they said a full storage re-certification was required for ESXi 5.5, and while the arrays were certified for ESXi 5.5, separate testing still needed to be done for vMSC with ESXi 5.5. As a result, despite the HCL returning results for vMSC and ESXi 5.5, the arrays are actually not certified for vMSC with ESXi 5.5. Note that this behavior is true for any partner you select; when I selected EMC and FC-SVD Metro Cluster Storage it returned one result for ESXi 5.5, just like it does if you select ESXi 5.1 U1.
If you look further at the HCL you can tell this because the FC Metro Cluster Storage entry is missing in the OS Release Details for ESXi 5.5.
If you switch the version to ESXi 5.1 Update 1 you’ll see the entry indicating it is certified for vMSC.
After checking further with VMware, it appears there was no Day 0 support for vMSC in vSphere 5.5, which means there are no arrays certified for vMSC with vSphere 5.5. The reason for this is that VMware has not yet completed what is needed for vMSC re-certification testing with vSphere 5.5. They do not expect to have this until Nov-Dec, at which time partners can begin the re-certification process for vSphere 5.5. As a result you probably will not see any arrays actually certified for vMSC with vSphere 5.5 until next year.
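The HCL pitfall above boils down to one rule: a top-level search hit isn't enough; the test configuration has to appear under the OS Release Details for the exact release. Here's a small sketch of that check. The data structure and entries are hypothetical simplifications, not VMware's actual HCL format:

```python
# Hypothetical, simplified model of HCL data: each array lists which
# test configurations are certified per ESXi release.
hcl = {
    "HP 3PAR": {
        "ESXi 5.1 U1": ["FC", "FC Metro Cluster Storage"],
        "ESXi 5.5":    ["FC"],   # base certification only -- no vMSC re-cert yet
    },
}

def certified(array, release, config):
    """True only if the config appears under that exact release's details."""
    return config in hcl.get(array, {}).get(release, [])

print(certified("HP 3PAR", "ESXi 5.1 U1", "FC Metro Cluster Storage"))  # True
print(certified("HP 3PAR", "ESXi 5.5", "FC Metro Cluster Storage"))     # False
```

A search that only matches on the array name and release would return a hit in both cases; drilling into the per-release details is what reveals the missing vMSC certification.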
I recently wrote about the challenges around management in heterogeneous virtual environments and how tools from vendors like SolarWinds can help overcome those challenges. One of the keys to having a good management product is that it’s not creating more management silos and can cover the full spectrum of your virtual environment. SolarWinds has a lot of management products that can cover every inch of your virtual environment from applications to hypervisors to servers, storage and networking. While having that end to end coverage from a single vendor can eliminate management headaches, wouldn’t it be nice if those tools could all integrate with each other to provide even better and more unified management?
Consider the following common scenario in a virtual environment: users are reporting that an application running inside a VM is responding very slowly. Where is the first place you typically start? You look specifically at application and operating system performance monitors. OK, so I see it’s performing poorly there; what next? Must be a resource problem, but is it a problem with virtual or physical resources? I’ll start with looking at my virtual resources since they are closer to the VM, but virtual resources are closely tied to physical resources so I’ll need to investigate both. But my physical resources are spread across different physical hardware areas; I’ll need to look at server, storage and networking resources all independently. This quest to find the root cause can have me jumping through many different management tools, and if they don’t interact with each other I can’t follow the trail across them, which can make solving the problem extremely difficult.
Now if you have tools that can talk to each other, you can more easily follow the bread crumbs across tools and have visibility into the whole relationship from app to bare metal. So instead of manually trying to piece the puzzle together while trying to follow the flow from an I/O generated inside a VM through all the layers to its final destination on a physical storage device, you would be able to see this at a single glance with individual tools that specialize in a specific area but work together with other tools that cover different areas. This is a big deal in a virtual environment that has a lot of moving parts that can make it easy to get lost and end up hitting dead ends and having to start over. You need a single pane of glass to look through so you’re not blindly working with tools that can’t show you the end to end view of your entire virtual environment.
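The bread crumb trail described above, app to VM to host to datastore to array, is essentially a walk down a dependency chain. A toy sketch of that walk, with all names and mappings being hypothetical example data:

```python
# Toy dependency chain from an application down to physical storage.
# Every name and mapping here is hypothetical example data.
app_to_vm   = {"OrderDB": "vm-042"}
vm_to_host  = {"vm-042": "esxi-host-03"}
vm_to_ds    = {"vm-042": "datastore-12"}
ds_to_array = {"datastore-12": "array-A"}

def trace(app):
    """Follow one slow application all the way down to the array serving it."""
    vm = app_to_vm[app]
    return [app, vm, vm_to_host[vm], vm_to_ds[vm], ds_to_array[vm_to_ds[vm]]]

# With siloed tools each hop below lives in a different console; with
# integrated tools the whole path is visible in one view.
print(" -> ".join(trace("OrderDB")))
```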
The latest release of SolarWinds’ Virtualization Manager does just that: it brings together two tools that focus on different areas, one inside a VM, the other outside of it. Virtualization Manager is a great management tool that is focused on providing the best possible management and monitoring for virtual environments including the hypervisor and physical resources. SolarWinds’ Server and Application Monitor, on the other hand, specializes in the management that occurs inside a VM at the guest OS and application layers. With the latest Virtualization Manager 6.0 release, SolarWinds has provided integration with Server and Application Monitor so they can share information and you get a single pane of glass that provides end to end visibility.
While that combination is a big win that makes management much easier, it gets even better. Together Virtualization Manager and Server and Application Monitor cover the management of apps, guest OSs, servers and the hypervisor, but what about my network and storage resources? In a virtual environment those are two very critical resources that are often physically separate from server resources. Virtualization Manager has that covered as well with integration with both SolarWinds Network Performance Monitor and SolarWinds Storage Manager. This integration completes the picture and ties together every single layer and resource that exists in a virtual environment. You now have the fusion of four great management tools, each with a specific focus area, into one great tool that is both broad and deep for maximum visibility into your virtual environment.
So I encourage you to check out SolarWinds Virtualization Manager and all the other great management tools that SolarWinds has to offer that make management in any virtual environment, including heterogeneous ones, both simple and easy and, more importantly, complete. To find out more about this great new Virtualization Manager integration, check out the resources below, which may leave you with a strong feeling of management tool envy!
- SolarWinds Virtualization Manager and Server & Application Manager Integration video
- Virtualization Manager 6.0 Federal Webcast
- Systems Management Demo showing Server and Application Monitor and Virtualization Manager integration (under the “Virtualization” tab)
- Virtualization Manager with SAM and NPM integration blog post
- Virtualization Manager and Storage Manager integration blog post
- Virtualization Manager and Server and Application Manager blog post
I’d like to welcome HyTrust as a new sponsor to vSphere-land. While they might be a new sponsor here, HyTrust and I actually go way back to the days when I was a judge for the Best of VMworld awards. Back in 2009 I was a judge for the security category for Best of VMworld, I always had a fondness for security so I really enjoyed judging that category. Back in those days virtualization security was still a relatively new area and didn’t get as much focus as it does today. I remember going through the motions for judging at VMworld, visiting each of the vendors in that category, asking questions and learning about their products.
HyTrust was actually my last stop of the day before we had to turn everything in and I was short on time when I visited them and talked with their president, Eric Chiu. But that didn’t matter as it didn’t take long for me to figure out that they had a pretty special product that really stood out from the other security products I had seen that day. Most of the other security products I had seen were doing basically the same thing just in slightly different ways. HyTrust’s approach to security on the other hand was completely different and made my decision for the winner of that category very easy. They impressed me and the other judges so much that we also chose them as Best of Show among all the other category winners.
I wrote a follow-up article on HyTrust for searchvmware.com describing their solution in more detail and also mentioned them as a solution in my article on How to Steal a VM in 3 Easy Steps. But don’t take my word for it, check them out yourself and you decide. It’s not easy being a virtualization startup and I recently read a great article on Eric Chiu and HyTrust that described all the behind the scenes financial dealings that help fund startups in their quest to be successful. It’s always challenging as a startup and many fail but HyTrust has ridden the virtualization wave and has not faltered. Virtualization security has never been more important than it is today and there will always be a need for good solutions to secure virtual environments so be sure and give HyTrust a look. They’ve been in the news recently with the latest version of their product and also were named one of the 10 recent tech investments to watch.
Maximum | Category | vSphere 5.1 | vSphere 5.5
Virtual disk size | VM Storage | 2TB minus 512 bytes | 62TB
Virtual SATA adapters per VM | VM I/O Devices | NA | 4
Virtual SATA devices per virtual SATA adapter | VM I/O Devices | NA | 30
xHCI USB controllers | VM I/O Devices | 1 | NA
Logical CPUs per host | Host Compute | 160 | 320
NUMA nodes per host | Host Compute | 8 | 16
Virtual CPUs per host | Host Compute | 2048 | 4096
Virtual CPUs per core | Host Compute | 25 | 32
RAM per host | Host Compute | 2TB | 4TB
Swap file size | Host Compute | 1TB | NA
VMFS5 - Raw Device Mapping size (virtual compatibility) | Host Storage | 2TB minus 512 bytes | 62TB
VMFS5 - File size | Host Storage | 2TB minus 512 bytes | 62TB
e1000 1Gb Ethernet ports (Intel PCI-x) | Host Networking | 32 | NA
forcedeth 1Gb Ethernet ports (NVIDIA) | Host Networking | 2 | NA
Combination of 10Gb and 1Gb Ethernet ports | Host Networking | Six 10Gb and Four 1Gb ports | Eight 10Gb and Four 1Gb ports
mlx4_en 40Gb Ethernet ports (Mellanox) | Host Networking | NA | 4
SR-IOV number of virtual functions | Host VMDirectPath | 32 | 64
SR-IOV number of 10G pNICs | Host VMDirectPath | 4 | 8
Maximum active ports per host (VDS and VSS) | Host Networking | 1050 | 1016
Port groups per standard switch | Host Networking | 256 | 512
Static/Dynamic port groups per distributed switch | Host Networking | NA | 6500
Ports per distributed switch | Host Networking | NA | 60000
Ephemeral port groups per vCenter | Host Networking | 256 | 1016
Distributed switches per host | Host Networking | NA | 16
VSS port groups per host | Host Networking | NA | 1000
LACP - LAGs per host | Host Networking | NA | 64
LACP - uplink ports per LAG (team) | Host Networking | 4 | 32
Hosts per distributed switch | Host Networking | 500 | 1000
NIOC resource pools per vDS | Host Networking | NA | 64
Link aggregation groups per vDS | Host Networking | 1 | 64
Concurrent vSphere Web Client connections to vCenter Server | vCenter Scalability | NA | 180
Hosts (with embedded vPostgres database) | vCenter Appliance Scalability | 5 | 100
Virtual machines (with embedded vPostgres database) | vCenter Appliance Scalability | 50 | 3000
Hosts (with Oracle database) | vCenter Appliance Scalability | NA | 1000
Virtual machines (with Oracle database) | vCenter Appliance Scalability | NA | 10000
Registered virtual machines | vCloud Director Scalability | 30000 | 50000
Powered-on virtual machines | vCloud Director Scalability | 10000 | 30000
vApps per organization | vCloud Director Scalability | 3000 | 5000
Hosts | vCloud Director Scalability | 2000 | 3000
vCenter Servers | vCloud Director Scalability | 25 | 20
Users | vCloud Director Scalability | 10000 | 25000
If you look at the VM resource configuration maximums over the years, there have been some big jumps in the sizes of the 4 main resource groups with each major vSphere release. With vSphere 5.0 the term “Monster VM” was coined, as the number of virtual CPUs and the amount of memory that could be assigned to a VM took a big jump from 8 vCPUs to 32 vCPUs and from 255GB of memory to 1TB of memory. The number of vCPUs further increased to 64 with the vSphere 5.1 release.
One resource that has stayed the same across releases going all the way back to vSphere 3.0 (and beyond?) is the virtual disk size, which has been limited to 2TB. With the release of vSphere 5.5, the maximum vCPUs and memory have stayed the same but the maximum size of a virtual disk has finally been increased to 62TB. That increase has been long awaited, as previously the only way to use bigger disks with a VM was using an RDM. So while the Monster VM may not have had its brains increased in this release, the amount it can pack away in its belly sure got a whole lot bigger!
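For perspective, here's what the jump from the old VMDK ceiling (2TB minus 512 bytes) to the new 62TB limit works out to in raw bytes, assuming the limits are defined in binary terabytes:

```python
TB = 2 ** 40  # assuming binary terabytes (tebibytes) for these limits

old_limit_bytes = 2 * TB - 512   # 2TB minus 512 bytes (vSphere 5.1 and earlier)
new_limit_bytes = 62 * TB        # vSphere 5.5

print(old_limit_bytes)                    # 2199023255040
print(new_limit_bytes / old_limit_bytes)  # roughly a 31x increase
```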
vSphere 5.5 is now available for your downloading pleasure here, while you’re waiting for it to download be sure and check out my huge (and growing) vSphere 5.5 link collection to get you all the info you need to know how to use it. This is also the shortest release cycle from the previous version that VMware has done to date at 377 days from the release of vSphere 5.1. Also don’t forget the documentation, the first document I always check out is the latest Configuration Maximums document to see how much larger everything continues to grow.
Many virtual environments lack consistency and uniformity and are often made up of a diverse mix of hardware and software components. This diversity could include things like multiple vendors for hardware components as well as hypervisors from different vendors. Since budgets play a big factor in determining the hardware and software used in a virtual environment, often there will be a variety of equipment from different vendors. In other cases customers want to use best of breed hardware and software which typically means having to go to different vendors to get the configuration they desire.
For hardware it’s not uncommon to see different hardware vendors used across servers, storage and networking, and sometimes different hardware vendors within the same resource group (i.e. storage). When it comes to hypervisor software the most commonly used software is VMware vSphere but it’s not uncommon to see Microsoft’s Hyper-V inside the same data center as a majority of customers use Microsoft for their server OS which gives them access to Hyper-V. Another common mix for client virtualization is running Citrix XenDesktop on top of VMware vSphere. Whatever the reasons are for choosing different vendors to build out a virtual environment, it’s rare to find a datacenter that has the same brand hardware across servers, storage and networking and only a single hypervisor platform running on it.
As a result of this melting pot of different hardware and hypervisors, managing these heterogeneous virtual environments can become extremely difficult as management tools tend to be specific to a hardware or hypervisor platform. Having so many management tool silos can add complexity, increase administration overhead and increase costs. To help offset that we’ll cover 5 tips for managing heterogeneous virtual environments so you can do it more effectively.
1 - Group similar hardware together for maximum effectiveness
If you are going to use a mix of different brands and hardware models you can’t just throw it all together and expect it to work effectively and efficiently. Every hardware platform has its own quirks and nuances and because virtualization is very picky about physical hardware you should group similar hardware together whenever possible. When it comes to servers this is usually done at the cluster level so features like vMotion and Live Migration that move VMs from host to host can ensure CPU compatibility. Because AMD & Intel CPUs use different architectures you cannot move a running VM across hosts with different processor vendors. There are also limitations on doing this within processor families of a single CPU vendor that you need to be careful of. Some of these can be overcome using features like VMware’s Enhanced vMotion Compatibility (EVC) but it can still cause administration headaches.
Shared storage you can intermix more easily, as many storage features built into the hypervisor like thin provisioning and Storage vMotion will work independently of the underlying physical storage hardware. With storage you can use different vendor arrays side by side in a virtual environment without any issues but you should be aware of the differences and limitations between file-based (NAS) and block-based (SAN) arrays. You can use file and block arrays together in a virtual environment but due to protocol and architecture differences you may run into issues with feature support and integration across file and block arrays. Therefore you should try and group similar storage arrays together as well so you can take advantage of features and integration that may only work within a storage array product family and within a storage protocol. By doing some planning up front and grouping similar hardware together you can increase efficiency, improve management and avoid incompatibility problems.
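The grouping rule for servers can be sketched as a simple bucketing exercise: cluster hosts by CPU vendor and family so vMotion/Live Migration stays compatible. The host inventory below is hypothetical example data:

```python
from collections import defaultdict

# Hypothetical host inventory. vMotion/Live Migration require matching
# CPU vendors (and compatible families), so group hosts accordingly.
hosts = [
    {"name": "esx01", "cpu_vendor": "Intel", "cpu_family": "Sandy Bridge"},
    {"name": "esx02", "cpu_vendor": "Intel", "cpu_family": "Sandy Bridge"},
    {"name": "esx03", "cpu_vendor": "AMD",   "cpu_family": "Bulldozer"},
]

clusters = defaultdict(list)
for h in hosts:
    # Cluster key of vendor + family keeps live migration compatible;
    # features like EVC can relax the family part at the cost of masking
    # newer CPU capabilities.
    clusters[(h["cpu_vendor"], h["cpu_family"])].append(h["name"])

for key, members in clusters.items():
    print(key, members)
```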
2 – Try and stick to one hypervisor platform for your production environment
I’ve seen surveys that say more than 65% of companies that have deployed virtualization are using multiple hypervisors. That number seems pretty high and what I question is how those companies are deploying multiple hypervisors. Someone taking a survey that has 100 vSphere hosts and 1 Hyper-V host can say they are running multiple hypervisors but the reality is they are using one primary hypervisor. When you start mixing hypervisors together for whatever reason you run into all sorts of issues that can have a big impact on your virtual environment. The biggest issues with this tend to be in these areas:
- Training – You have to make sure your staff is continually trained on 2 different hypervisor platforms
- Support – You need to have support contracts in place for 2 different hypervisor platforms
- Interoperability – There is some interoperability between hypervisors using conversion tools, but since they use different disk formats this can be cumbersome
- Management – Both VMware & Microsoft have tried to implement management of the other platform into their native management tools but it is very limited and not all that usable
- Costs – You are doubling your costs for training, support, management tools, etc.
As a result it makes sense to use one hypervisor as your primary platform for most of your production environment. It’s OK to do one-offs and pockets here and there with a secondary hypervisor, but try and limit that.
3 – Make strategic use of a secondary hypervisor platform
Sometimes it may make sense financially to intermix hypervisor platforms in your data center for specific use cases. Using alternative lower cost or free hypervisor platforms can help increase your use of virtualization while keeping costs down. If you do utilize a mix of hypervisors in your data center do it strategically to avoid complications and some of the issues I mentioned in the previous tip. Here are some suggestions for strategically using multiple hypervisor platforms together:
- Use one primary hypervisor platform for production and a secondary for your development and test environments. You may consider using the same hypervisor for production and test though so you can test for issues that may occur from running an application on a specific hypervisor before it goes into production.
- Create tiers based on application type and how critical it is. This can be further defined by the level of support that you have for each hypervisor. If you have 24×7 support on one and 9×5 support on another you’ll want to make sure you have all your critical apps running on the hypervisor platform with the best support contract.
- If you have remote offices you might consider having a primary hypervisor platform at your main site and using an alternate hypervisor at your remote sites.
These are just some of the logical ways to divide and conquer using a mix of hypervisors; you may find other ways that work better for you and give you the benefits of both platforms without the headaches.
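The tiering suggestion above, placing workloads by criticality and support contract, can be sketched as a trivial placement policy. The support levels and applications here are hypothetical examples:

```python
# Hypothetical tiering policy: run critical apps on the platform with the
# stronger support contract, everything else on the lower-cost platform.
support = {"vSphere": "24x7", "Hyper-V": "9x5"}  # assumed contract levels

apps = [
    {"name": "ERP",      "critical": True},
    {"name": "DevWiki",  "critical": False},
    {"name": "Exchange", "critical": True},
]

def place(app):
    """Assign an app to a hypervisor tier based on its criticality."""
    return "vSphere" if app["critical"] else "Hyper-V"

for app in apps:
    platform = place(app)
    print(f'{app["name"]} -> {platform} ({support[platform]} support)')
```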
4 – Understand the differences and limitations of each platform
No matter if you’re using different hardware or hypervisor platforms, you need to know the capabilities and limitations of each platform, which can impact availability, performance and interoperability. Each hypervisor platform tends to have its own disk format so you cannot easily move VMs across hypervisor platforms if needed. When it comes to features there are usually some requirements around using them and despite being similar they tend to be proprietary to each hypervisor platform.
When it comes to hardware, CPU compatibility amongst hosts is a big one, because moving a running VM from one host to another using vMotion or Live Migration requires both hosts to have CPUs from the same manufacturer (i.e. Intel or AMD) as well as the same architecture (CPU family). With hypervisors there are some features unique to particular hypervisor platforms like the power saving features built into vSphere. Knowing the capabilities and limitations of the hardware and hypervisors that you use can help you strategically plan how to use them together more efficiently and help avoid compatibility problems.
5 – Leverage tools that can manage your environment as a whole
Managing heterogeneous environments can often be quite challenging as you have to switch between many different management tools that are specific to hardware, applications or a hypervisor platform. This can greatly increase administration overhead and decrease efficiency as well as limit the effectiveness of monitoring and reporting. When you have management silos in a data center you lack the visibility across the environment as a whole which can create unique challenges as virtual environments demand unified management. Compounding the problem is the fact that native hypervisor tools are only designed to manage a specific hypervisor so you need separate management tools for each platform. Some of the hypervisor vendors have tried to extend management to other hypervisor platforms but they are often very limited and more designed to help you migrate from a competing platform. The end result is a management mess that can cause big headaches and fuels the need for management tools that can operate at a higher level and that can bridge the gap between management silos.
SolarWinds delivers management tools that can stretch from apps to bare metal and can cover every area of your virtual environment. This provides you with one management tool that can manage multiple hypervisor platforms and also provide you with end to end visibility from the apps running in VMs to the physical hardware that they reside on. Tools like SolarWinds Virtualization Manager deliver integrated VMware and Microsoft Hyper-V capacity planning, performance monitoring, VM sprawl control, configuration management, and chargeback automation; all in one affordable product that’s easy to download, deploy, and use. Take it even further by adding on SolarWinds Storage Manager and Server and Application Monitor and you have a complete management solution from a single vendor that covers all your bases and creates a melting pot for your management tools to come together under a unified framework.