Reducing the write cycles on an SSD is still a wise thing to do.
In my system the boot drive is a 120 GByte SSD, divided into three partitions plus some deliberately unused space:

1. the tiny partition Windows itself creates for its boot procedure;
2. the C:\ partition, 30 GByte (between 7 and GByte is free), dedicated to Windows itself;
3. the D:\ partition, which holds my portable apps and program files;
4. a 10 GByte stretch of empty space, left for the drive to use for error management.
Then there is a 3 TByte SATA drive for my data, which also holds a partition containing a page file with a static size of ((2 x the amount of RAM in the PC) + 20%), plus a set of portable drives for backup purposes.
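To make the sizing rule concrete, here is a minimal sketch of that formula in Python. I'm reading the "+ 20%" as 20% on top of the doubled RAM amount, and the 16 GByte figure is just an example value, not my actual RAM:

```python
def page_file_gb(ram_gb):
    """Static page file size in GByte: (2 x RAM), plus 20% on top."""
    return (2 * ram_gb) * 1.2

# Example: a machine with 16 GByte of RAM
print(page_file_gb(16))  # 38.4 -> reserve roughly a 38-39 GByte page file
```

A static size matters here: letting Windows grow and shrink the page file on demand is itself a source of fragmentation on the partition that holds it.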
NTFS is the most common file system on Windows. It performs best when its partitions have between 10% and 20% of free space, though more free space is always preferred. Making partitions helps you achieve that goal.
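A quick way to check where a volume sits relative to that 10-20% guideline is the Python standard library's `shutil.disk_usage`. This is a generic sketch, not tied to my setup; pass the drive root you care about, e.g. `"C:\\"` on Windows or `"/"` elsewhere:

```python
import shutil

def free_space_pct(path):
    """Return the percentage of free space on the volume containing `path`."""
    usage = shutil.disk_usage(path)
    return 100 * usage.free / usage.total

pct = free_space_pct("/")
if pct < 10:
    print(f"Only {pct:.1f}% free - expect NTFS performance to suffer")
else:
    print(f"{pct:.1f}% free - within a comfortable range")
```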
NTFS is also a file system that makes a mess of how it stores files on disk. That is by design, which is why it needs a relatively big chunk of free space and file management (defragging) to keep up performance. This makes it faster than the standard EXT3 or EXT4 file systems on Linux, for example, but only when there is enough free space available. When free space runs short, performance quickly drops below that of the Linux file systems. Those file systems fragment much more slowly and suffer far fewer performance problems when drives are filled to the brim. That is also by design.
Severely simplified: NTFS packs files very close together, which results in less 'travel' for the hard disk heads. That is in essence a good idea, but only with static files. When files grow or shrink, this dense packing results in files being chopped up, making the hard disk head travel more instead of less. The Linux file systems spread files out, which initially makes the head travel more, but files don't fragment as quickly this way, because there is room for them to grow or shrink.
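That difference can be shown with a toy model. The sketch below is a deliberate oversimplification (a disk as a flat list of blocks, first-fit allocation), not how either file system really works, but it demonstrates why packing files back to back fragments a growing file while leaving head-room does not:

```python
# Toy block-allocation model: each slot holds a file id or None (free).

def place(disk, file_id, size, start=0):
    """First-fit: write `size` blocks into the first free slots from `start`."""
    placed = 0
    for i in range(start, len(disk)):
        if disk[i] is None:
            disk[i] = file_id
            placed += 1
            if placed == size:
                break

def fragments(disk, file_id):
    """Count contiguous runs of blocks belonging to `file_id`."""
    runs, prev = 0, False
    for slot in disk:
        cur = slot == file_id
        if cur and not prev:
            runs += 1
        prev = cur
    return runs

# Dense packing: B sits directly behind A, so when A grows it is chopped up.
dense = [None] * 20
place(dense, "A", 5)            # A occupies blocks 0-4
place(dense, "B", 5)            # B occupies blocks 5-9, no gap
place(dense, "A", 3)            # A grows -> must jump past B
print(fragments(dense, "A"))    # 2 fragments

# Spread-out placement: a gap is left behind A, so A grows in place.
spread = [None] * 20
place(spread, "A", 5)           # A occupies blocks 0-4
place(spread, "B", 5, start=10) # B starts at block 10, leaving head-room
place(spread, "A", 3)           # A grows into its own gap
print(fragments(spread, "A"))   # 1 fragment
```

In the dense layout the grown file ends up in two pieces on opposite sides of B; in the spread layout it stays contiguous, at the cost of initially longer distances between files.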
More modern file systems follow the design ideas of the Linux file systems more closely, as these give you stable performance. Their extra speed comes from better interaction with the operating system and smarter ways of handling the actual reading and writing of files.
Yes, partitions create artificial limits on your drive(s), which may cause problems down the road if you didn't size the partitions properly for the tasks you intended the computer for. But partitions make the background maintenance NTFS needs much easier on your system. That saves wear and tear on spinning drives and keeps things organized.
There is another consideration, especially when virtual machines are being used. Keeping Windows separated from (portable) applications and user data gives you a very clear advantage. Most virtualization products let you assign partitions from the host to each virtual machine you create. Installing and especially configuring Windows in such a VM can be quite time-consuming, but with portable apps you can cut out most of the configuration time: assign the correct VM drive letter (in my case D:\) to the host partition, and every shortcut works just as well in the VM as on the host. You can also continue your tasks in the VM right where you left off on the host. It saves a ton of storage space on the host too, which considerably reduces the time you need for creating backups, as these can be a lot smaller.
Not making partitions makes life easier during the setup of your computer; afterwards, it adds (unexpected) complications. You spend much less time setting up systems than using them, just saying. And yes, Microsoft is hell-bent on dumping everything in C:\. That sense of initial 'simplicity' is a false one, and it creates lots of opportunities for Microsoft and third parties to sell you software to help with the complications they created themselves.