"Best Practice" says to have a dedicated nic for management, and another dedicated for vmotion. Ideally different subnets/vlans.
In smaller environments, though I often will create this:
vSwitch0 with 2 NICs (hopefully on separate cards/ASICs) and one vmkernel port carrying both management and vMotion. This works just fine, thank you, despite sometimes being described as not "best practice". Oh well - I think the concern is that in heavy vMotion situations (especially where Storage vMotion is involved) management traffic could be impeded or swamped. I've just never seen it happen in the real world, though in environments with more than 4-5 hosts I always set things up according to "best practice", just because ....
vSwitch1 with 2 NICs and 2 vmkernel ports (each with its own IP address) for iSCSI - see the sketch after this list.
vSwitch2 with 2 (or more) NICs and however many virtual machine port groups/VLANs are needed.
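
For what it's worth, here's a rough sketch of that iSCSI vSwitch in pyVmomi (the Python vSphere SDK). Treat it as illustrative only: the host address, credentials, vmnic names, port group names, and IPs are all placeholders, and a real setup would also pin each port group to a single active uplink and bind the vmk ports to the software iSCSI adapter (neither shown here).

```python
# Sketch: build "vSwitch1" above with pyVmomi. Host, credentials,
# vmnic names, port group names, and IPs are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
netsys = host.configManager.networkSystem

# vSwitch1 bonded to two uplinks (ideally on separate cards/ASICs)
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 128
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(
    nicDevice=["vmnic2", "vmnic3"])
netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# Two port groups, one vmkernel port each, each with its own IP.
# For iSCSI port binding, each port group should end up with exactly
# one active uplink (teaming override not shown).
for pg_name, ip in (("iSCSI-A", "10.10.10.11"), ("iSCSI-B", "10.10.10.12")):
    pg_spec = vim.host.PortGroup.Specification()
    pg_spec.name = pg_name
    pg_spec.vswitchName = "vSwitch1"
    pg_spec.vlanId = 0
    pg_spec.policy = vim.host.NetworkPolicy()
    netsys.AddPortGroup(portgrp=pg_spec)

    vmk_spec = vim.host.VirtualNic.Specification()
    vmk_spec.ip = vim.host.IpConfig(
        dhcp=False, ipAddress=ip, subnetMask="255.255.255.0")
    netsys.AddVirtualNic(portgroup=pg_name, nic=vmk_spec)

Disconnect(si)
```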
(Just to be clear, the "best practice" version would have vSwitch0 with 2 NICs and 2 vmkernel ports, one configured for management and the other for vMotion. Each NIC would be active for one vmkernel port but available as failover for the other ...)
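
If you want to script that teaming arrangement, the per-port-group active/standby override looks roughly like this, again in pyVmomi. Port group and vmnic names are placeholders ("Management Network" is the ESXi default; the vMotion name is whatever you created), and netsys is the same HostNetworkSystem handle obtained in the previous sketch:

```python
# Sketch: "best practice" teaming on vSwitch0 - each vmkernel port
# group gets one dedicated active uplink with the other as standby.
from pyVmomi import vim

def mirror_active_standby(netsys):
    # Management rides vmnic0 and fails over to vmnic1;
    # vMotion rides vmnic1 and fails over to vmnic0.
    teams = (("Management Network", ["vmnic0"], ["vmnic1"]),
             ("vMotion", ["vmnic1"], ["vmnic0"]))
    for pg_name, active, standby in teams:
        policy = vim.host.NetworkPolicy()
        policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
        policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
            activeNic=active, standbyNic=standby)

        pg_spec = vim.host.PortGroup.Specification()
        pg_spec.name = pg_name
        pg_spec.vswitchName = "vSwitch0"
        pg_spec.vlanId = 0
        pg_spec.policy = policy
        netsys.UpdatePortGroup(pgName=pg_name, portgrp=pg_spec)
```

The point of the mirrored active/standby lists is that each traffic type owns an uplink in normal operation, but neither loses connectivity if a NIC or path dies.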