Hi dediosj,
Welcome to the Communities! Since this is your first introduction to VMware, let me clear up a couple of items. You mentioned VMware Workstation, but keep in mind that Workstation is not used to run production VMs; ESXi is the product you would use to get the best performance and reliability. What may be confusing in your Google searches is that consumer-grade products like VMware Workstation and VMware Fusion are popular for running virtualized copies of the ESXi hypervisor, but that is only for lab or development purposes. So when you see mention of installing ESXi on Workstation, just remember that's for lab / testing only and you won't run real VMs like that. Installing ESXi on Workstation is often referred to as nested virtualization, since you can then run VMs on a virtualized ESXi VM (i.e. a virtual machine inside a virtual machine). OK, I'll stop there as it may get confusing... but you will learn quickly. There's nothing overly difficult about using VMware.
Baremetal vs Hosted
ESXi is a "baremetal" hypervisor, meaning it sits right on top of the hardware. Both Workstation and Fusion are known as "hosted" solutions and they are installed as an application on top of an existing operating system (i.e. on top of Windows, MAC, linux, etc.). In contrast, VMware ESXi is the operating system, and will be installed directly onto your new hardware with no other operating system below it. Once you've booted your new Hardware to an ISO or CD and installed ESXi, after some configuration you can then run VMs on it.
Redundancy / Clustering
You mentioned you will start by buying one server, then adding another later. I recommend purchasing both servers at the same time if possible, so you can make mistakes and learn how the clustering works (it's easy, don't worry) before placing production workloads on them.
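To give you a feel for how simple the cluster part is: once vCenter is up (more on that below), creating a cluster with HA and DRS enabled is a few clicks in the client, or a couple of API calls. Here's a minimal sketch using VMware's pyVmomi Python SDK; the vCenter hostname, credentials and cluster name are placeholders, and skipping certificate validation is for a lab only:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only connection: skips certificate validation; hostname/credentials are examples
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)

# Take the first datacenter and create a cluster with HA and DRS turned on
datacenter = si.RetrieveContent().rootFolder.childEntity[0]
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(enabled=True),   # vSphere HA
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True))   # DRS
datacenter.hostFolder.CreateClusterEx(name="Cluster01", spec=spec)

Disconnect(si)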
VMware HCL
Perhaps the most important thing is ensuring that the hardware you buy is on the VMware Hardware Compatibility List (HCL). The NAS device you choose will also be important, though many NAS boxes aren't on the HCL, so that choice is more up to you. You mentioned that you are considering splitting the disks between local and NAS storage. Although that would work, it's generally not a good idea; keep in mind that cluster features like HA rely on shared storage to restart VMs on the surviving host. Pick one option (either local disk or remote NAS) and commit to making that good.
Memory
There is no hard memory requirement beyond the minimum needed at install time (currently around 4 to 6GB to install ESXi). That said, plan on the ESXi hypervisor itself using about 4GB of RAM. You can of course allocate more memory to your VMs than the host physically has (known as over-committing), but that can cause performance degradation (ballooning, swapping). I would recommend a minimum of 64GB for your config, more if possible. Remember you need to size your 2-host cluster so that in a failure scenario all VMs could run on one host if needed (handy for maintenance too).
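To make that sizing rule concrete, here's a rough back-of-the-napkin check (plain Python; the numbers are made-up examples, plug in your own VM list):

# N+1 sizing sketch for a 2-host cluster (example numbers, adjust to your environment)
host_ram_gb = 64                    # physical RAM per host
hypervisor_overhead_gb = 4          # plan roughly 4GB for ESXi itself
vm_ram_gb = [8, 8, 4, 4, 4, 2, 2]   # planned memory per VM

usable_one_host = host_ram_gb - hypervisor_overhead_gb
total_vm_ram = sum(vm_ram_gb)

# In a 2-host cluster, all VMs should fit on ONE host (host failure or maintenance)
print(f"Total VM RAM: {total_vm_ram} GB / usable on a single host: {usable_one_host} GB")
print("Fits on one host:", total_vm_ram <= usable_one_host)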
P2V
Since you will be migrating your existing physical servers to VMs, you can use VMware Converter to P2V them. The one exception is the domain controller: rather than converting it (P2V of a live DC can cause Active Directory replication problems), build a fresh VM, join it to your existing domain, and then promote it. Once all looks good you can decommission the physical box, or keep it as a secondary DC. It's good practice to keep a physical DC around if possible, but that's becoming more and more rare nowadays. For example, my company is 100% virtualized; we don't have a single physical server that isn't running ESXi. It takes time, though, to build up your experience and confidence to do that.
Important
You will also need to learn how to configure time sync on your ESXi hosts and VMs: point each host at reliable NTP sources, and decide for each VM whether it gets time from VMware Tools or from inside the guest (Windows domain members should follow the normal w32time/domain hierarchy). This is critical to your success; Active Directory in particular will misbehave if clocks drift too far apart.
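If you end up scripting that part, the vSphere API exposes NTP settings per host. A minimal sketch, again with pyVmomi and reusing the si connection from the clustering snippet above; the host name and NTP servers are placeholders:

from pyVmomi import vim

# Find the host by name (placeholder DNS name), then push an NTP config to it
host = si.RetrieveContent().searchIndex.FindByDnsName(dnsName="esxi01.lab.local", vmSearch=False)

ntp_cfg = vim.host.NtpConfig(server=["0.pool.ntp.org", "1.pool.ntp.org"])
host.configManager.dateTimeSystem.UpdateDateTimeConfig(
    config=vim.host.DateTimeConfig(ntpConfig=ntp_cfg))

# Start the ntpd service and have it start automatically with the host
svc = host.configManager.serviceSystem
svc.UpdateServicePolicy(id="ntpd", policy="on")
svc.StartService(id="ntpd")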
Getting Started
Anyway, you will mostly learn from experience. Just go to https://my.vmware.com and download the evaluations you want; they are all 60-day trials with all features enabled. If that runs out, you can log in with another email address (e.g. gmail, etc.), get another 60-day evaluation key, and rebuild your test ESXi hosts. In the end you can go with the totally free version of ESXi, but that restricts API access and does not provide redundancy. You will probably want to install 2 ESXi hosts and a vCenter server. vCenter can now be deployed as a virtual appliance known as the VCSA (vCenter Server Appliance), or you can do it the old-fashioned way where vCenter is installed on a Windows VM and pointed at the bundled SQL Express or at a separate SQL Server VM.
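Once you have an eval (or paid) license in place, you can drive the whole environment through that API. Another small pyVmomi sketch, reusing the si connection from above, just to list whatever is in the inventory:

from pyVmomi import vim

# Walk the inventory and print each VM's name and power state
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)
view.DestroyView()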
Networking
As for network cards, just make sure they are on the HCL (Intel and Broadcom are the popular ones). Most 10Gb shops use only 2 NICs per host. If you will be deploying Gb copper (i.e. 1000Mbps) you would typically go with anywhere from 4 to 8 NIC ports total, ideally spread across multiple physical cards to avoid a single point of failure. Really, it's totally up to you how you want to lay it out, and it depends how you want to divide your traffic. Multiple functions can be blended together on the same interfaces (e.g. Management and vMotion), but storage traffic (NFS or iSCSI) typically goes on a dedicated pair of NICs. The VM network can consist of as few as 2 interfaces, up to the theoretical maximums.
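As a hedged illustration of the kind of layout I mean, here's a pyVmomi sketch that builds a standard vSwitch on two uplinks from different physical cards and adds a dedicated port group for storage traffic; the host object is the one from the time-sync snippet above, and the vmnic names and VLAN ID are just examples:

from pyVmomi import vim

net = host.configManager.networkSystem

# Standard vSwitch backed by two uplinks, ideally on separate physical cards (names are examples)
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic2"]))
net.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# Dedicated port group for NFS/iSCSI on its own VLAN (VLAN ID is an example)
pg_spec = vim.host.PortGroup.Specification(
    name="Storage", vlanId=20, vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy())
net.AddPortGroup(portgrp=pg_spec)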
Network Example
I dug up an old copper example I did a while back. I've blacked out the identifying details, but you should get the idea of the main things we look at when designing the networking. I think this was one of the first iterations, before I confirmed the actual PCI slot numbers from lspci, so for the discerning HP advocates out there, don't bite my head off if the slot naming is off. Overall the design should show what I'm trying to express.
Well, that should be enough to get you started. I don't want to overwhelm you with too much (maybe too late lol). Hopefully some others can help out here by adding feedback, links, videos, blogs, etc. that are useful for learning. Of course, the official VMware Education classes would be a great investment (e.g. Install, Configure, Manage) and are really mandatory before going to production.