...and this one too:
Now this was not something I had seen before, and it threw me for a while before I finally figured it out.
The vApp is assigned its IP settings from an IP Pool associated with the datacenter; in this case both VMs receive their IP, netmask, gateway and DNS settings from this pool. When I checked this in more detail, I found that the network associated with this IP Pool was incorrect.
What had happened was that we had migrated the vApp's network from a standard vSwitch to a Distributed vSwitch a few months ago. The port groups on the old standard vSwitch were named slightly differently from the new port group on the VDS, even though they carried the same VLAN ID. What this meant was that when the vApp tried to power on again, it was still looking for the port group from the old vSwitch; it could not find it and therefore could not power on.
To resolve this there were two simple settings to change:
First, the IP Pool needed to be associated with the correct network again.
To do this, simply go to the datacenter in the vSphere client and select the IP Pools tab. Right-click the pool and select Properties, then go to the 'Associations' tab and place a tick against the correct network for the IP Pool.
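For anyone who prefers to script this, the same re-association can also be done through the vSphere API. The sketch below uses pyvmomi; the vCenter host, credentials and port group names are placeholders I've made up for illustration, so treat it as a starting point rather than a drop-in fix.

```python
# Sketch: re-associate a vApp IP Pool with the new VDS port group.
# Assumptions: pyvmomi is installed and the host, credentials, and
# port group names below are placeholders for your environment.

def retarget_associations(associations, old_name, new_name, new_network=None):
    """Point any association that still references the old standard-vSwitch
    port group at the new VDS port group. Pure helper, easy to test."""
    for assoc in associations:
        if assoc.networkName == old_name:
            assoc.networkName = new_name
            if new_network is not None:
                assoc.network = new_network
    return associations


if __name__ == "__main__":
    # Imports deferred so the helper above works without pyvmomi installed.
    from pyVim.connect import SmartConnect, Disconnect

    si = SmartConnect(host="vcenter.example.com",          # placeholder
                      user="administrator@vsphere.local",  # placeholder
                      pwd="secret")                        # placeholder
    try:
        content = si.RetrieveContent()
        dc = content.rootFolder.childEntity[0]  # first datacenter
        new_net = next(n for n in dc.network
                       if n.name == "New-VDS-PortGroup")   # placeholder
        for pool in content.ipPoolManager.QueryIpPools(dc=dc):
            retarget_associations(pool.networkAssociation,
                                  "Old-vSwitch-PortGroup",  # placeholder
                                  new_net.name, new_net)
            content.ipPoolManager.UpdateIpPool(dc=dc, pool=pool)
    finally:
        Disconnect(si)
```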
Second, the vApp itself needed to be updated to use the correct network for each of its IP settings.
Select the vApp from the Hosts and Clusters view and then, in the Summary tab, select 'Edit Settings'. Select 'Advanced' from the left-hand menu and then click the 'Properties' button to reveal the 'Advanced Property Configuration' window as below.
Next, just select each entry in turn and click 'Edit' to change the network to the correct value.
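This step can also be scripted. As I understand it, the network-backed vApp properties carry the port group name inside their type string (e.g. 'ip:Some-PortGroup'), so the fix is to rewrite that string and push the edit back via the API. Again this is a hedged sketch with made-up names, not a tested tool; look up the vApp object however you normally would.

```python
# Sketch: retarget the network embedded in a vApp property's type string,
# then push the edits back with UpdateVAppConfig. Names are placeholders,
# and the 'ip:<port group>' type format is an assumption worth verifying.

def retarget_type(prop_type, old_name, new_name):
    """Swap the old port group name for the new one in a vApp property
    type string such as 'ip:Old-vSwitch-PortGroup'."""
    prefix, sep, network = prop_type.partition(":")
    if sep and network == old_name:
        return prefix + ":" + new_name
    return prop_type


if __name__ == "__main__":
    # Deferred import so the helper above is usable without pyvmomi.
    from pyVmomi import vim

    vapp = ...  # a vim.VirtualApp, looked up in your inventory
    edits = []
    for prop in vapp.vAppConfig.property:
        new_type = retarget_type(prop.type,
                                 "Old-vSwitch-PortGroup",  # placeholder
                                 "New-VDS-PortGroup")      # placeholder
        if new_type != prop.type:
            prop.type = new_type
            edits.append(vim.vApp.PropertySpec(operation="edit", info=prop))
    if edits:
        vapp.UpdateVAppConfig(vim.vApp.VAppConfigSpec(property=edits))
```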
Once these settings were applied, the vApp could be started, its IPs were once again allocated to each VM, and all was well!
A simple error, and entirely self-inflicted! Just be aware of vApps and their associated networks, as these settings are not changed when you change the individual VMs' network settings.