VMware Ansible – Getting Started Examples


Infrastructure as Code (IaC) means managing infrastructure with a DevOps-style methodology: the current configuration of your infrastructure is represented in code, and any additions, changes, or deletions are also made through that code. There are various ways to codify your infrastructure, some proprietary, like AWS CloudFormation, and others generic, like Ansible or Terraform.

The introduction below gives some basic examples of how you can use Ansible with the VMware community collection to manage your infrastructure. It is recommended to deploy a test vCenter and a test ESXi host and experiment there, rather than going straight to your production machines. Also be aware that although Ansible should be idempotent, certain aspects of configuration must be made in a specific order, and re-running them can cause actions to be taken again. There are ways to resolve this within Ansible, but it is something you should test and be aware of.
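As a sketch of one way to guard a non-idempotent step, you can query the current state first and only run the step when it is needed. The switch name, the debug placeholder, and the condition below are illustrative; the connection facts are the ones set later in this article:

```yaml
# Sketch: skip a one-off step when the object it would create already exists.
- name: Check whether dvSwitch1 already exists
  community.vmware.vmware_dvswitch_info:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: false
    switch_name: dvSwitch1
  register: dvs_info
  delegate_to: localhost

- name: One-off creation/migration step (placeholder)
  ansible.builtin.debug:
    msg: "dvSwitch1 not found - run the one-off step here"
  when: dvs_info.distributed_virtual_switches | length == 0
```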

Preparing the Environment

I’m assuming you already have a grounding in Ansible. In this case we’ll set up Ansible, add the various dependencies, and then add the VMware community collection on your developer machine.

$ python -m venv venv
$ source venv/bin/activate
$ pip install --upgrade pip

Install Ansible as per: https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html

$ ansible-galaxy collection install community.vmware
$ pip install requests
$ pip install pyVim
$ pip install pyvmomi

You may also want to install the vSphere Automation SDK; it can be helpful and is required by some modules.

$ pip install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git

Full documentation of the VMware community collection can be found at https://docs.ansible.com/ansible/latest/collections/community/vmware/index.html

Foreword

You can structure your Ansible however you wish; there are various ways to build the structure, add roles, and so on. In this example we’ll keep it very simple. Essentially the structure will consist of:

  • vmware.yml
  • dc_dc1.yml
  • cluster_cluster1.yml
  • dvswitch_dvswitch1.yml
  • host_host1.yml

It will also be simple in that the credentials are requested from the user at the point of running the Ansible playbook. There are various methods to manage and store credentials that are beyond the scope of this document; in this case the credentials are stored in “facts” to allow them to be reused across the plays.
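As one alternative sketch (the filename vault.yml is an assumption), the credentials could instead live in a vars file encrypted with Ansible Vault and be loaded by each play:

```yaml
# vault.yml - create and encrypt with: ansible-vault create vault.yml
# Contents (before encryption):
vcenter_username: "administrator@vsphere.local"
vcenter_password: "<password>"
esxi_password: "<password>"

# In each play, load the file with:
#   vars_files:
#     - vault.yml
# and run the playbook with --ask-vault-pass
```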

The other thing to bear in mind about the VMware community collection is that it runs against “localhost”, i.e. you’re not connecting to the ESXi host to configure it; the collection is essentially a wrapper around the VMware API.

The example is also fairly rough; there are many ways to make an Ansible play as reusable as possible through the use of variables, so there is more that could be done to streamline it.
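For instance, the connection and naming values repeated in every play below could be pulled into a shared vars file (the filename vars/common.yml is an assumption) and loaded with vars_files:

```yaml
# vars/common.yml - shared values referenced by every play
datacenter_name: "DC1"
cluster_name: "Cluster1"
validate_certs: false

# In each play, replace the inline vars with:
#   vars_files:
#     - vars/common.yml
```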

Main File (vmware.yml)

# vmware.yml - all playbooks are included here

  - hosts: localhost
    become: false
    pre_tasks:
      - name: Check Ansible Version
        assert:
          that:
            - ansible_version.major == 2
            - (ansible_version.minor == 7 or ansible_version.minor == 9)
          msg: "This playbook requires Ansible 2.7 or 2.9"
    vars_prompt:
      - name: "vcenter_hostname"
        prompt: "vCenter FQDN"
        private: no
        default: "vcenter1.domain.com"
      - name: "vcenter_username"
        prompt: "vCenter Username"
        private: no
        default: "administrator@vsphere.local"
      - name: "vcenter_password"
        prompt: "vCenter Password"
        default: "<password>"
        private: yes
      - name: "esxi_password"
        prompt: "ESXi Password"
        default: "<password>"
        private: yes
    tasks:
      - name: "Set captured variables"
        set_fact:
          vcenter_hostname: "{{ vcenter_hostname }}"
          vcenter_username: "{{ vcenter_username }}"
          vcenter_password: "{{ vcenter_password }}"
          esxi_password: "{{ esxi_password }}"

# Datacenter Configuration
  - import_playbook: dc_dc1.yml

# Cluster Configuration
  - import_playbook: cluster_cluster1.yml

# Distributed Switch Configuration
  - import_playbook: dvswitch_dvswitch1.yml

# Host Configuration
  - import_playbook: host_host1.yml

Datacenter (dc_dc1.yml)

---
  - name: "DC1 Configuration"
    hosts: localhost
    gather_facts: false

    # Configuration is per the default unless explicitly defined.

    vars:
      - datacenter_name: "DC1"
      - validate_certs: "false"
  
    tasks:    
      - name: Create Datacenter
        community.vmware.vmware_datacenter:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          datacenter_name: '{{ datacenter_name }}'
          validate_certs: '{{ validate_certs }}'
          # Specific Configuration
          state: present
        delegate_to: localhost

Cluster 1 (cluster_cluster1.yml)

---
  - name: "Cluster1 Configuration"
    hosts: localhost
    gather_facts: false

    # Configuration is per the default unless explicitly defined.

    vars:
      - datacenter_name: "DC1"
      - cluster_name: "Cluster1"
      - validate_certs: "false"

    tasks:
        - name: Create Cluster
          community.vmware.vmware_cluster:
            hostname: '{{ vcenter_hostname }}'
            username: '{{ vcenter_username }}'
            password: '{{ vcenter_password }}'
            validate_certs: '{{ validate_certs }}'
            # Specific Configuration
            datacenter_name: '{{ datacenter_name }}'
            cluster_name: '{{ cluster_name }}'
          delegate_to: localhost

        - name: "DRS Configuration"
          community.vmware.vmware_cluster_drs:
            hostname: '{{ vcenter_hostname }}'
            username: '{{ vcenter_username }}'
            password: '{{ vcenter_password }}'
            datacenter_name: '{{ datacenter_name }}'
            cluster_name: '{{ cluster_name }}'
            validate_certs: '{{ validate_certs }}'
            # Specific Configuration
            drs_default_vm_behavior: "fullyAutomated"
            enable: true
            drs_vmotion_rate: 3
            predictive_drs: false
          delegate_to: localhost
          register: drs_msg

        - name: "HA Configuration"
          community.vmware.vmware_cluster_ha:
            hostname: '{{ vcenter_hostname }}'
            username: '{{ vcenter_username }}'
            password: '{{ vcenter_password }}'
            datacenter_name: '{{ datacenter_name }}'
            cluster_name: '{{ cluster_name }}'
            validate_certs: '{{ validate_certs }}'
            # Specific Configuration
            enable: true
          delegate_to: localhost
          register: ha_msg

        - name: "EVC Mode Configuration"
          community.vmware.vmware_evc_mode:
            hostname: '{{ vcenter_hostname }}'
            username: '{{ vcenter_username }}'
            password: '{{ vcenter_password }}'
            datacenter_name: '{{ datacenter_name }}'
            cluster_name: '{{ cluster_name }}'
            validate_certs: '{{ validate_certs }}'
            # Specific Configuration
            evc_mode: "intel-sandybridge"
            state: present
          delegate_to: localhost
          register: evc_msg

        # Debug
        # - debug:
        #     var: drs_msg
        # - debug:
        #     var: ha_msg
        # - debug:
        #     var: evc_msg.msg

Distributed vSwitch (dvswitch_dvswitch1.yml)

---
  - name: "dvSwitch1 Configuration"
    hosts: localhost
    gather_facts: false

    # Configuration is per the default unless explicitly defined.

    vars:
      - datacenter_name: "DC1"
      - validate_certs: "false"
    
    tasks:
      - name: Create dvSwitch
        community.vmware.vmware_dvswitch:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          datacenter: '{{ datacenter_name }}'
          validate_certs: '{{ validate_certs }}'
          # Specific Configuration
          switch: dvSwitch1
          version: 7.0.3
          mtu: 1500
          uplink_quantity: 2
          uplink_prefix: 'Uplink'
          discovery_protocol: lldp
          discovery_operation: both
          state: present
        delegate_to: localhost
      
      - name: Create Management Network Port Group
        community.vmware.vmware_dvs_portgroup:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          validate_certs: '{{ validate_certs }}'
          # Specific Configuration
          portgroup_name: "Management Network"
          switch_name: "dvSwitch1"
          vlan_id: 0
          num_ports: 120
          port_binding: static
          state: present
        delegate_to: localhost
      
      - name: Create vmotion Port Group
        community.vmware.vmware_dvs_portgroup:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          validate_certs: '{{ validate_certs }}'
          # Specific Configuration
          portgroup_name: "vmotion"
          switch_name: "dvSwitch1"
          vlan_id: 3806
          num_ports: 120
          port_binding: static
          state: present
        delegate_to: localhost

ESXi Host (host_host1.yml)

The VMware ESXi host configuration is substantial. The example below assumes you have an ESXi host that has been built but is currently standalone (i.e. not joined to vCenter), and that it has basic network configuration in place so it can be reached across the network.

The play below will join the ESXi host to vCenter, place it in Cluster1, perform some basic configuration such as DNS and NTP, then add the host to the Distributed vSwitch, prepare the VMkernel ports, add two NICs for iSCSI, set up an IQN, and add an iSCSI target.

Essentially a complete configuration of a VMware ESXi host, so you can then put it into production. In my case I needed the host to connect to an HPE Nimble Storage array, which has some recommended iSCSI timeout tweaks. Although these can be applied via VMware Host Profiles, in this example I’m showing how you could push the settings into the host via ESXCLI. Why? Because the VMware community collection does not yet have the functionality to do this within Ansible itself.

---
  - name: "host1 Host Configuration"
    hosts: localhost
    gather_facts: true

    vars:
      - datacenter_name: "DC1"
      - cluster_name: "Cluster1"
      - validate_certs: "false"
      - host_name: "host1"
      - esxi_hostname: "host1.domain.com"

    tasks:
      - name: "Join ESXi Host to vCenter"
        community.vmware.vmware_host:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          datacenter: "{{ datacenter_name }}"
          cluster: "{{ cluster_name }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          esxi_hostname: "{{ esxi_hostname }}"
          esxi_username: "root"
          esxi_password: "{{ esxi_password }}"
          state: present
        delegate_to: localhost
      
      - name: "DNS Configuration"
        community.vmware.vmware_host_dns:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          esxi_hostname: "{{ esxi_hostname }}"
          host_name: "{{ host_name }}"
          type: static
          domain: domain.com
          dns_servers:
            - 192.168.100.10
            - 192.168.100.11
          search_domains:
            - domain.com
            - domain2.com
        delegate_to: localhost
      
      - name: "NTP Configuration"
        community.vmware.vmware_host_ntp:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          validate_certs: "{{ validate_certs }}"
          esxi_hostname: "{{ esxi_hostname }}"
          # Specific Configuration
          state: present
          ntp_servers:
              - ntp0.domain.com
              - ntp1.domain.com
        delegate_to: localhost
      
      - name: Start the ntpd service on the ESXi host
        community.vmware.vmware_host_service_manager:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          esxi_hostname: '{{ esxi_hostname }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          service_name: ntpd
          state: present
        delegate_to: localhost

      # Apply VMware Licence Key
      #
      - name: Add ESXi license and assign to the ESXi host
        community.vmware.vcenter_license:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          esxi_hostname: '{{ esxi_hostname }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          license: <KEY HERE>
          state: present
        delegate_to: localhost

      # Network Configuration
      # An ordered workflow of stanzas. Assumes the host's management VMkernel is attached to a VSS with two uplinks.
      # The workflow detaches vmnic1 and attaches it to the DVS, then migrates the VMkernel port from the VSS to the DVS,
      # then removes the VSS, and finally moves the other pNIC.
      # It is not idempotent: running the playbook a second time removes a vmnic from the DVS and re-adds it.

      - name: Add Host to dvSwitch - Step 1 (Add vmnic1)
        community.vmware.vmware_dvs_host:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          esxi_hostname: '{{ esxi_hostname }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          switch_name: dvSwitch1
          vmnics:
              - vmnic1
          state: present
        delegate_to: localhost

      - name: Add Host to dvSwitch - Step 2 (Migrate vmkernel)
        community.vmware.vmware_vmkernel:
            hostname: '{{ vcenter_hostname }}'
            username: '{{ vcenter_username }}'
            password: '{{ vcenter_password }}'
            esxi_hostname: '{{ esxi_hostname }}'
            validate_certs: "{{ validate_certs }}"
            # Specific Configuration
            dvswitch_name: "dvSwitch1"
            portgroup_name: "Management Network"
            network:
              type: 'static'
              ip_address: 192.168.0.27
              subnet_mask: 255.255.255.0
            state: present
            enable_mgmt: true
        delegate_to: localhost
      
      - name: Add Host to dvSwitch - Step 3 (Remove Standard Switch)
        community.vmware.vmware_vswitch:
          hostname: '{{ esxi_hostname }}'
          username: "root"
          password: '{{ esxi_password }}'
          esxi_hostname: '{{ esxi_hostname }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          switch_name: "vSwitch0"
          state: absent
        delegate_to: localhost
      
      - name: Add Host to dvSwitch - Step 4 (Add vmnic0)
        community.vmware.vmware_dvs_host:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          esxi_hostname: '{{ esxi_hostname }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          switch_name: dvSwitch1
          vmnics:
              - vmnic0
              - vmnic1
          state: present
        delegate_to: localhost

      - name: Update the TCP/IP stack configuration of the default stack
        community.vmware.vmware_host_tcpip_stacks:
          hostname: "{{ vcenter_hostname }}"
          username: "{{ vcenter_username }}"
          password: "{{ vcenter_password }}"
          esxi_hostname: "{{ esxi_hostname }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          default:
            hostname: "{{ host_name }}"
            domain: domain.com
            preferred_dns: 192.168.100.10
            alternate_dns: 192.168.100.11
            search_domains:
              - domain.com
              - domain2.com
            gateway: 192.168.0.1
            #congestion_algorithm: cubic
            #max_num_connections: 12000
        delegate_to: localhost
      - name: Add vMotion vmkernel port with vMotion TCP/IP stack
        community.vmware.vmware_vmkernel:
            hostname: '{{ vcenter_hostname }}'
            username: '{{ vcenter_username }}'
            password: '{{ vcenter_password }}'
            esxi_hostname: '{{ esxi_hostname }}'
            validate_certs: "{{ validate_certs }}"
            # Specific Configuration
            dvswitch_name: "dvSwitch1"
            portgroup_name: "vmotion"
            network:
              type: 'static'
              ip_address: 192.168.1.27
              subnet_mask: 255.255.255.0
              tcpip_stack: vmotion
            state: present
        delegate_to: localhost

      - name: Update the TCP/IP stack configuration of the vmotion TCP/IP Stack
        community.vmware.vmware_host_tcpip_stacks:
          hostname: "{{ vcenter_hostname }}"
          username: "{{ vcenter_username }}"
          password: "{{ vcenter_password }}"
          esxi_hostname: "{{ esxi_hostname }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          vmotion:
            gateway: 192.168.1.1
        delegate_to: localhost

      # ISCSI Configuration

      - name: Add iSCSI vSwitch 1
        community.vmware.vmware_vswitch:
          hostname: '{{ esxi_hostname }}'
          username: "root"
          password: '{{ esxi_password }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          switch: "vSwitch1"
          nics:
          - vmnic2
        delegate_to: localhost

      - name: Add iSCSI 1 Port Group
        community.vmware.vmware_portgroup:
          hostname: "{{ esxi_hostname }}"
          username: "root"
          password: "{{ esxi_password }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          hosts: ["{{ esxi_hostname }}"]
          switch: "vSwitch1"
          portgroup: "iSCSI 1"
          vlan_id: 0
        delegate_to: localhost

      - name: Add iSCSI 1 VMKernel Port
        community.vmware.vmware_vmkernel:
            hostname: '{{ esxi_hostname }}'
            username: "root"
            password: '{{ esxi_password }}'
            esxi_hostname: '{{ esxi_hostname }}'
            validate_certs: "{{ validate_certs }}"
            # Specific Configuration
            vswitch_name: "vSwitch1"
            portgroup_name: "iSCSI 1"
            network:
              type: 'static'
              ip_address: 192.168.10.27
              subnet_mask: 255.255.255.0
              tcpip_stack: default
            state: present
        delegate_to: localhost

      - name: Add iSCSI vSwitch 2
        community.vmware.vmware_vswitch:
          hostname: '{{ esxi_hostname }}'
          username: "root"
          password: '{{ esxi_password }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          switch: "vSwitch2"
          nics:
          - vmnic3
        delegate_to: localhost

      - name: Add iSCSI 2 Port Group
        community.vmware.vmware_portgroup:
          hostname: "{{ esxi_hostname }}"
          username: "root"
          password: "{{ esxi_password }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          hosts: ["{{ esxi_hostname }}"]
          switch: "vSwitch2"
          portgroup: "iSCSI 2"
          vlan_id: 0
        delegate_to: localhost

      - name: Add iSCSI 2 VMKernel Port
        community.vmware.vmware_vmkernel:
            hostname: '{{ esxi_hostname }}'
            username: "root"
            password: '{{ esxi_password }}'
            esxi_hostname: '{{ esxi_hostname }}'
            validate_certs: "{{ validate_certs }}"
            # Specific Configuration
            vswitch_name: "vSwitch2"
            portgroup_name: "iSCSI 2"
            network:
              type: 'static'
              ip_address: 192.168.10.227
              subnet_mask: 255.255.255.0
              tcpip_stack: default
            state: present
        delegate_to: localhost

      - name: Enable iSCSI of ESXi
        community.vmware.vmware_host_iscsi:
          hostname: "{{ vcenter_hostname }}"
          username: "{{ vcenter_username }}"
          password: "{{ vcenter_password }}"
          esxi_hostname: "{{ esxi_hostname }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          state: enabled
        delegate_to: localhost
      
      - name: Add VMKernels to iSCSI config of ESXi
        community.vmware.vmware_host_iscsi:
          hostname: "{{ vcenter_hostname }}"
          username: "{{ vcenter_username }}"
          password: "{{ vcenter_password }}"
          esxi_hostname: "{{ esxi_hostname }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          iscsi_config:
            vmhba_name: "vmhba65"
            port_bind:
              - vmk2
              - vmk3
          state: present
        delegate_to: localhost

      - name: Add a dynamic target to iSCSI config of ESXi
        community.vmware.vmware_host_iscsi:
          hostname: "{{ vcenter_hostname }}"
          username: "{{ vcenter_username }}"
          password: "{{ vcenter_password }}"
          esxi_hostname: "{{ esxi_hostname }}"
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          iscsi_config:
            vmhba_name: vmhba65
            send_target:
              address: 192.168.10.50
          state: present
        delegate_to: localhost

      # Configure Local Storage
      # Requires the "vSphere Automation SDK for Python" to be installed.
      - name: Rename a datastore
        community.vmware.vmware_object_rename:
          hostname: '{{ vcenter_hostname }}'
          username: '{{ vcenter_username }}'
          password: '{{ vcenter_password }}'
          validate_certs: "{{ validate_certs }}"
          # Specific Configuration
          new_name: "host1:datastore1"
          object_name: datastore1
          object_type: Datastore
        delegate_to: localhost

      # iSCSI Advanced Configuration
      # Ansible modules don't exist for this (yet), so we perform it via ESXCLI over SSH.


      - name: "Advanced Configuration - iSCSI LoginTimeout"
        shell: esxcli iscsi adapter param set -A vmhba65 -k "LoginTimeout" -v "30"
        register: result
        delegate_to: "{{ esxi_hostname }}"
      
      - name: "Advanced Configuration - iSCSI NoopOutInterval"
        shell: esxcli iscsi adapter param set -A vmhba65 -k "NoopOutInterval" -v "30"
        register: result
        delegate_to: "{{ esxi_hostname }}"

      - name: "Advanced Configuration - iSCSI NoopOutTimeout"
        shell: esxcli iscsi adapter param set -A vmhba65 -k "NoopOutTimeout" -v "30"
        register: result
        delegate_to: "{{ esxi_hostname }}"

Running the Playbook

You may also want to create a hosts inventory file so that the ESXCLI tasks can be delegated to the host over SSH, e.g.

[esxiservers]
host1.domain.com

To run the play you can use:

ansible-playbook vmware.yml -i hosts -u root -k

You are then prompted for the vCenter hostname, vCenter username, vCenter password, and finally the SSH password for the host. Once collected, the playbook configures the host as per the instructions. A dry run with --check can be useful first, although the non-idempotent steps noted above may still report changes.

Conclusion

Hopefully this has been a useful, albeit brief, introduction to Ansible for VMware. As you can see it is very adaptable, and it can be used to manage virtual machines as well (although that is beyond the scope of this article).
