I have been intending for some time now to write about a set of technologies that is challenging the conventional model of computing. They are a combination of Software Defined Data Center (SDDC), Public/Hybrid Cloud, Infrastructure as Code (IaC), Containers, Orchestration and Composable Infrastructure. Most of these technologies are disruptive. Organizations that do not embrace them face a real threat from a business perspective, since many of their competitors may already be ahead of the curve and reaping the benefits.
As it goes in Formula One racing: if you are not innovating and developing your car throughout the racing season, you are falling behind, because every other competitor is innovating, testing and bringing new components to every race. The lack of development shows by the end of a season, when the teams that began the year strong might not be the ones who end it strong. The same analogy applies to IT: failing to embrace these technologies the right way can have significant business impact.
I intend to provide clarity, simplify these technologies and talk about use case scenarios through an "IT 2.0" series of posts. They will cover strategy, architecture, implementation and operations perspectives.
The first post is on Containers. What are containers and why should we care?
What are containers
Looking at virtual machines is a good starting point for understanding containers. Virtual machines (VMs) can be represented as shown in the figure below, where one or more physical hosts run the virtualization platform. The hypervisor kernel manages the platform and its resources. Each application needs to run on a discrete operating system, or virtual machine. This model's primary focus is optimal utilization of all hardware resources.
A container is a stand-alone, executable package of software that includes everything needed to run it: binaries, libraries, application code and other settings. Containers can run on a bare-metal machine (shown in the illustration below), on a virtual machine or in the cloud. Instead of one application requiring a full operating system and one unique virtual machine, multiple lightweight containers can be executed on a single VM or host.
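A minimal sketch of that idea, assuming Docker is installed as the container engine; the image and container names below are examples only.

```shell
# Two isolated web-server containers on one host, no separate guest OS each.
if command -v docker >/dev/null 2>&1; then
    docker run -d --name web1 -p 8081:80 nginx:alpine   # first containerized web server
    docker run -d --name web2 -p 8082:80 nginx:alpine   # a second, isolated instance on the same host
    docker ps --filter "name=web"                       # both run side by side, sharing the host kernel
    docker rm -f web1 web2                              # clean up
else
    echo "docker not found; commands shown for illustration only"
fi
demo_done=1
```

Both containers start in seconds because no operating system has to boot; the host kernel is shared.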
Why Containers
Container architecture addresses many issues with the conventional virtual-machine-based application delivery and execution model, and gives development teams flexibility, reliability and agility.
Reliability: The same piece of code can run in the development, QA, test, staging or production environment, because the binary is packaged with all of its dependencies. Thanks to this abstraction, a version mismatch in underlying software between QA and production breaking applications is very much a non-issue.
Portability: A developer can develop the code on their laptop and execute it in the on-prem, virtualized or cloud instance of the same container engine without having to make any code modifications. This enhances productivity and promotes flexibility by giving freedom to the development community.
Building block for DevOps: DevOps utilizes philosophies, practices and tools that increase an organization's ability to deliver applications and software at a rapid pace. In contrast with the conventional way of application development, the focus is on working in an agile fashion. Containers, by providing the ability to manage smaller pieces of code with portability and rapid deployment and execution, become one of the foundations of an organization's journey to DevOps.
Rapid Deployment and Execution: Because of their size and architecture, containers can be deployed, started and restarted almost instantaneously. Moreover, restarting them does not require restarting a complete operating system, giving much quicker restart capability.
Promotes Cloud Native Functionality: One of the main values of cloud computing comes from applications that are developed with its key tenets: loosely coupled, modular and horizontally scalable. Containerization promotes modularity by creating unique, smaller containers for the various tiers of an application rather than one larger container. It also promotes quicker scaling using automation. Combined with cloud computing's ability to have a presence in various regions across the globe, these attributes bring the true value of cloud to the community.
How to Build and Use Containers
In the next post we will focus on the most popular container platform, Docker. I will go deeper into Docker architecture and demonstrate its implementation with simple use cases.
IT Infrastructure Ramblings
DC Migration, EMC, NetApp Storage, VMware vSphere; vRealize; Site Recovery Manager; Horizon View; IaaS; DaaS; Thin and Zero Clients; Server Infrastructure; PCoIP, Blast
Thursday, February 22, 2018
Tuesday, May 10, 2016
VMware OS Optimization Tool - Why is it must have for all VDI Admins?
VMware Flings recently announced a utility that should be in the toolkit of everyone involved with managing a virtual desktop infrastructure. It is called the VMware OS Optimization Tool, and it helps optimize Windows 7/8/2008/2012/10 systems for use with VMware Horizon View. The tool includes customizable templates to enable or disable Windows system services and features per VMware recommendations and best practices. Since most Windows system services are enabled by default, the tool can be used to easily disable unnecessary services and features to improve performance.
The Why
The focus of the tool is ensuring that the desktop base image is as optimized as possible by stopping the services VMware considers optional or a cause of unnecessary resource utilization on the desktop. Anyone who has managed a VDI back-end environment will attest to the importance of per-VM resource optimization and its net effect on overall consumption at the host and cluster level. It ultimately decides your consolidation ratio, the number of virtual desktops you can run per physical server, and in turn your ROI.
If that were not enough, Login VSI (one of the leading providers of VDI performance testing software) recently performed testing on optimized vs. non-optimized images, and their results show a 44% increase in VSImax score for the optimized Windows 10 desktops.
The How
Running the tool is fairly straightforward.
1. Download the tool
2. Extract the compressed file and run the executable on the desktop or server operating system that you want to optimize
3. Run the tool in Analyze mode and identify the areas that it can potentially optimize as shown in the screenshot below

4. Based on your (and your end users') needs, adjust the settings as necessary; ratcheting all the settings down might not be desirable for your environment
5. Click Optimize to optimize the guest OS image. The tool then gets to work and applies the modifications suggested by the analysis
6. Upon completion, you should see the analysis summary at the top of the window and appreciate the reduced overhead!
Final Note
The utility of the OS Optimization Tool is not limited to VMware deployments. I own a three-year-old Dell notebook that was upgraded to Windows 10 last year. It was constantly huffing and puffing, with the disk activity LED flashing almost constantly, and performance was noticeably degraded. After applying the optimizations, it is definitely running much more smoothly. So whether you are a VMware shop or affiliated with Citrix or Microsoft in the virtual or physical desktop world, this tool can automate optimization and enhance your user experience.
Wednesday, April 23, 2014
Sockets vs. Cores - VM Configuration
As a VMware engineer, you have probably wondered about this at some point in your career: when you create a virtual machine, how do you decide whether to give a VM its 4 vCPUs as 2 sockets with 2 cores each, or as 1 socket with 4 cores? There has been conflicting information out there, and the sockets vs. cores decision has always been a challenge for VMs requiring more compute.
There is now definitive guidance from the VMware vSphere team, which lays out two simple best practices:
- At the time of creation of a VM, vSphere will create as many virtual sockets as requested vCPUs, with cores per socket equal to one. This enables vNUMA to select and present the best virtual NUMA topology to the guest operating system. This makes the configuration "wide" and "flat".
- When you need to change the cores per socket, ensure that you mirror the physical server's NUMA topology. Once the default cores per socket is changed, the configuration is no longer "wide" and "flat", so vNUMA will not automatically pick the best NUMA configuration based on the physical server; it will use the changed configuration, which has the potential for a topology mismatch.
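As a concrete illustration, these are the two settings involved as they appear in a VM's .vmx file (the same values are exposed in the vSphere client's CPU settings). The figures below are for a hypothetical 8-vCPU VM running on hosts whose NUMA nodes have 4 cores each:

```
numvcpus = "8"
cpuid.coresPerSocket = "4"
```

With cpuid.coresPerSocket left at its default of 1, the same VM would present 8 sockets of 1 core each, the "wide" and "flat" layout that lets vNUMA pick the topology automatically; the override to 4 is only appropriate because it mirrors the physical 4-core NUMA node in this example.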
The details are in a recent blog post by the vSphere team here.
Some terminology:
NUMA: NUMA systems are advanced server platforms with more than one system bus. A multi-GHz processor needs large memory bandwidth to use its power effectively; the problem becomes more obvious on symmetric multiprocessing systems, where many processors compete for memory bandwidth. NUMA links small nodes using a high-performance connection. Each node contains processors and memory; however, a memory controller allows the node to use memory on all other nodes. When a processor accesses memory that is not on its own node, it traverses the NUMA connection, resulting in lower access speed compared to local memory.
NUMA Scheduling in ESXi
The NUMA scheduler on ESXi dynamically balances processor load and memory locality. The algorithm works as described below.
1. Each virtual machine managed by the NUMA scheduler is assigned a home node. A home node is one of the system's NUMA nodes containing processors and local memory, as indicated by the System Resource Allocation Table (SRAT).
2. When memory is allocated to a virtual machine, the ESXi host preferentially allocates it from the home node. The virtual CPUs of the virtual machine are constrained to run on the home node to maximize memory locality.
3. The NUMA scheduler can dynamically change a virtual machine's home node to respond to changes in system load. The scheduler might migrate a virtual machine to a new home node to reduce processor load imbalance. Because this might cause more of its memory to be remote, the scheduler might migrate the virtual machine's memory dynamically to its new home node to improve memory locality.
In summary, a VM administrator should choose the maximum number of sockets available for a VM, and if the cores per socket need to be adjusted, it should be done in accordance with the physical server's NUMA topology. NUMA scheduling and memory placement policies in ESXi can manage all virtual machines transparently, so administrators do not need to explicitly balance virtual machines between nodes.
Sunday, May 19, 2013
No SAN ... No Problem! VMware VSA @ work
VMware announced the vSphere Storage Appliance more than a year ago. If for whatever reason you have not looked at the product yet, this post makes the point that VSA is a great product: it works well, and its latest iteration has addressed a lot of the shortcomings of the previous releases.
Let's take a look at what the product is targeted at. It is a really good fit for SMBs, where even an entry-level SAN is cost prohibitive. The architecture of a VSA cluster includes physical servers with local hard disks, ESXi as the operating system on those servers, and the vSphere Storage Appliance virtual machines, which run clustering services to create volumes that are exported as VSA datastores via NFS.
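For context on that NFS export, this is how such an export is mounted as a datastore from the ESXi shell; the server address and share name below are examples only, and in a VSA cluster the appliance normally handles this for you.

```shell
# Mounting an NFS export as a datastore (example values; VSA automates this).
if command -v esxcli >/dev/null 2>&1; then
    esxcli storage nfs add --host 192.168.1.10 --share /VSADs-1 --volume-name VSADs-1
    esxcli storage nfs list    # confirm the datastore is mounted
else
    echo "esxcli not available; commands shown for illustration only"
fi
nfs_demo=1
```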
Benefits:
- No SAN needed
- Simple to implement
- Low Cost
- Licensed with vSphere Essentials Plus Kit
- Can add disks to expand the capacity of cluster (ver. 5 feature)
- Can be deployed in brownfield or greenfield implementation (brownfield ver. 5 feature)
Shortcomings:
- Comes in two-node and three-node (host) versions only
- Once implemented, you cannot add a node (host) for more compute at a future date
- Needs a physical box for vCenter to hold the quorum in a two-node deployment
There is a very good evaluation guide written by Cormac Hogan. Rawlinson Rivera has done some commendable work over here explaining the details surrounding brownfield deployments. This would be very helpful for people who already have existing VMs on individual hosts in the environment.
In summary, you get all the enterprise-level features (vMotion and High Availability) at a relatively low cost. Moreover, no SAN or SAN management skills are needed.
Sunday, May 12, 2013
Network Performance Throughput Testing in Virtualized or Physical Environments using iperf
Dropped packets should not occur on any network; they typically indicate congestion or possibly a hardware issue. Even one percent dropped packets in either direction can significantly throttle overall throughput.
Dropped packets in vSphere can be monitored by selecting the ESXi host and clicking the Performance tab. Select Advanced > Network > Real Time, select None under Counters, and then select Receive packets dropped and Transmit packets dropped.
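On recent ESXi builds the same counters can also be read from the ESXi shell; vmnic0 below is an example NIC name, and esxcli is only present on an ESXi host.

```shell
# Reading NIC drop counters from the ESXi shell (example NIC name).
if command -v esxcli >/dev/null 2>&1; then
    esxcli network nic list                 # enumerate physical NICs
    esxcli network nic stats get -n vmnic0  # per-NIC counters, including dropped packets
else
    echo "esxcli not available; commands shown for illustration only"
fi
nic_demo=1
```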
iperf is a network testing tool that can create a TCP or UDP data stream between two virtual or physical nodes. It is open source software, available for both Linux and Windows at http://sourceforge.net/projects/iperf/.
To test the maximum throughput of the network interfaces for 200 seconds:
On the server node, execute the following command:
$ iperf -s -i 5
On the client node, execute the following command:
$ iperf -c <server-name> -t 200 -i 5 -m
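A UDP variant of the same test is also worth knowing: in UDP mode iperf reports packet loss directly, which ties back to the dropped-packet discussion above. The sketch below runs both ends over loopback so it is self-contained; substitute the real server address in practice.

```shell
# UDP throughput and loss test with iperf (loopback used for a self-contained demo).
if command -v iperf >/dev/null 2>&1; then
    iperf -s -u -i 1 &                        # server side, UDP mode
    server_pid=$!
    sleep 1
    iperf -c 127.0.0.1 -u -b 10M -t 5 -i 1    # 10 Mbit/s UDP stream; summary line shows loss %
    kill "$server_pid" 2>/dev/null
else
    echo "iperf not found; commands shown for illustration only"
fi
udp_demo=1
```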
In a future post I will demo the utility in action in a VMware ESXi environment.
Saturday, May 11, 2013
vCenter Single Sign-On failure after changing SQL Server port
Guess what? After changing the SQL Server port, the vCenter Single Sign-On service would refuse to start. Upon research, I was led to VMware KB 2033516. The explanation and some screenshots based on the KB are depicted below.
Step 1. Modify the SSO Setting.
Step 2. Make the port modification in the config file.
Step 3. That's it. Restart SSO and vCenter Service and you are back in business.
Please note that the password values and hostnames are cleared from the config file screenshots.
Monday, January 21, 2013
VMware ESX Management Services will not restart
I have run into a few occasions with ESXi 4 and ESX 4 where I wanted to restart the management services and they would not come back, caused by a stuck process (service). The result is that the host and all the VMs running on it are disconnected from the cluster, though the virtual machines continue to run without issues.
However, to get that host back into the cluster without rebooting it, you can follow these steps.
- Log onto the affected host from a KVM, physical console or iLO/DRAC remote console and change to the following directory:
- #cd /var/run/vmware
- Get the process ID (PID) for the management service by checking the content of the following file, and kill the management service
- #cat vmware-host.pid
- note the pid
- use the pid to kill the process by using #kill -9 <pid value>
- Delete the files vmware-host.pid and watchdog-host.pid by using
- #rm vmware-host.pid
- #rm watchdog-host.pid
- Start the management service by
- #service mgmt-vmware start
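The steps above can be sketched as one small shell function. The PID directory is passed in so it can be pointed at the location used on your build; the file names follow this post and may differ between releases.

```shell
# Kill the stuck management process and clear its stale PID files.
recover_hostd() {
    piddir="$1"
    pid=$(cat "$piddir/vmware-host.pid") || return 1   # read the recorded PID
    kill -9 "$pid" 2>/dev/null                         # terminate the stuck process
    rm -f "$piddir/vmware-host.pid" "$piddir/watchdog-host.pid"   # clear stale PID files
    # on a real host, restart the management service afterwards:
    # service mgmt-vmware start
}
```

On an affected host this would be invoked as `recover_hostd /var/run/vmware` before starting the service again.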