According to the YouTube channel 'Dave's Garage', Docker containers can be up to 15% slower than running the same tool on bare metal. He also tested VMs on the same hardware; there the performance losses were between 1 and 3%.
So if pure performance is your goal, containers are usually not the smartest choice. What containers do buy you is convenience.
Containerization is actually a pretty old concept, thought up around 20 years ago, so it is not something new; you could well have run into it already.
Containerization is not the same thing as sandboxing, but if you had to describe it to a layperson in a few succinct words, 'sandboxing' is probably the term to reach for. Container software such as Docker embeds itself into the operating system at a pretty low level: it detects the hardware resources that are available and assigns them to the active containers as they need them.
With a few active containers, you can be confident that the container software is managing them as efficiently as possible within the resource limits you have configured. The operating system itself offers similar scheduling and is good at it, but a runaway process can still bring the whole system down; with containers that is much less of a problem.
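Those resource limits are something you set per container. As a minimal sketch of what that looks like with Docker (the container name `capped-web` and the `nginx` image are just examples, and the block skips itself when Docker is not installed):

```shell
if command -v docker >/dev/null 2>&1; then
  # Cap this container at 1.5 CPU cores and 512 MB of RAM.
  docker run --detach --name capped-web --cpus="1.5" --memory="512m" nginx
  # Show the limits Docker actually recorded for the container.
  docker inspect --format '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' capped-web
  # Clean up the demo container again.
  docker rm -f capped-web
else
  echo "docker not found, skipping demo"
fi
```

The kernel then enforces those caps for everything running inside the container, which is exactly the "assigning resources within your configured parameters" described above.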
Portable apps and sandboxes have to run inside your operating system as ordinary programs. Containers, in essence, bring their own userland with them and only share the host's kernel.
Containers can easily be cloned and recreated, scripted if you so prefer, which makes them ideal for automating the deployment of a test, acceptance and production environment, and for adding and removing compute capacity when you or your company needs it. That makes containers a great fit for cloud computing, and for the enthusiast who runs a test lab at home.
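The "recreate by script" part can be as small as a shell function: throw the old instance away and start a fresh one from the image. The names here (`test-web`, `acceptance-web`, the `nginx` image) are placeholders, and the block skips itself without Docker installed:

```shell
# Tear down and recreate a container from a known image,
# treating the whole environment as disposable.
recreate() {
  local name="$1" image="$2"
  docker rm -f "$name" 2>/dev/null   # remove the old instance, if any
  docker run --detach --name "$name" "$image"
}

if command -v docker >/dev/null 2>&1; then
  recreate test-web nginx
  recreate acceptance-web nginx
else
  echo "docker not found, skipping demo"
fi
```

Run the same script on any machine with Docker and you get the same environment, which is the whole point of scripted deployment.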
If you decide to get your feet wet and start playing with Docker, set up a Linux VM inside your Windows installation and install Docker in that VM. Docker works very well on Linux. There is a Windows installer for Docker, but it is not nearly as good an experience as the Linux version; if you think the Docker version for Windows works well, then you haven't seen it in action on Linux. The Windows installer also affects what other software you can install in Windows, especially anything that touches virtualization or networking.
If your Windows installation is modern enough to run Microsoft's WSL2, you can use that to run the Linux version of Docker instead.
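Roughly, the WSL2 route looks like this (assuming the Ubuntu distribution and its `docker.io` package; the PowerShell step is shown as comments because it only runs on Windows):

```shell
# Step 1, in an elevated PowerShell on Windows:
#   wsl --install -d Ubuntu
#
# Step 2, inside the Ubuntu shell that WSL2 gives you:
#   sudo apt-get update
#   sudo apt-get install -y docker.io
#   sudo usermod -aG docker "$USER"   # then log out and back in
#
# Step 3, verify the installation:
docker --version 2>/dev/null || echo "docker not installed here yet"
```

After that you are running the Linux version of Docker, just without a separate VM to maintain yourself.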
Containers, when set up well, occupy a lot less storage space on your drives than VMs do. Containers and virtual machines have a great deal in common, but containers have the advantage of a (much) smaller storage footprint. That is not guaranteed, though: not every containerized project is set up that well.
Containerization is a pretty well thought-out concept on the Linux operating system. Windows and macOS are much more lacking in that regard.
While a Docker container will run on Linux as well as on macOS or Windows, other kinds of containers do not. At least, I have not heard of any that do, but then I have only been playing with these things for a month and a half myself.
'LXC' is an older, more mature Linux container technology, and 'AppImage' is a portable application format that feels similar in use. For an AppImage it doesn't matter which distribution of Linux (Debian, Ubuntu, Red Hat, openSUSE, Arch, Gentoo, etc.) you run it on: it simply works. No need to install anything; just make the file executable, run it, and you have working software.
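The difference in day-to-day use looks roughly like this (`Some-App.AppImage` is a placeholder filename, not a real download, and the LXC commands are shown as comments because they need the LXC/LXD runtime installed first):

```shell
# AppImage: mark the downloaded file executable and run it. Nothing to install.
if [ -f Some-App.AppImage ]; then
  chmod +x Some-App.AppImage
  ./Some-App.AppImage
else
  echo "no AppImage file present, nothing to run"
fi

# LXC, by contrast, needs its runtime set up first, after which you launch
# full system containers, e.g. with the LXD tooling:
#   lxc launch ubuntu:22.04 testbox
#   lxc exec testbox -- bash
```

An AppImage packages one application; an LXC container gives you a whole minimal Linux system to work inside.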
Docker, Flatpak and Snap, on the other hand, all require you to install their runtime first, and the systems are not interchangeable.
Now, if you are playing around with 'Proxmox' (open-source virtualization software that puts many similar commercial offerings to shame), you can run LXC containers directly on the Proxmox host, right next to full VMs. That is more or less all the advantages of containers and VMs rolled into one.
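On a Proxmox host this is done with the `pct` tool. A sketch, assuming container ID 101 is free and a Debian template has already been downloaded into local storage (the exact template filename varies with what `pveam` fetched); the block skips itself anywhere that is not a Proxmox host:

```shell
# 'pct' only exists on a Proxmox VE host.
if command -v pct >/dev/null 2>&1; then
  # Create an LXC container from a downloaded template, then start it.
  pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname demo-ct --memory 512 --cores 2
  pct start 101
else
  echo "not a Proxmox host, skipping"
fi
```

The container then shows up in the Proxmox web UI next to your VMs, with the same start/stop/backup workflow.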
I haven't played around with Kubernetes; I only know it came out of Google, which uses this kind of software to run the services it provides to end users. Google fills its server farms all over the globe with the most standard computer hardware it can, which is much cheaper and easier to repair and maintain than server-grade hardware, which can be hard to get and is always very expensive. Kubernetes divides the workload of all the services over whatever hardware is active, so when some cheap server fails, it diverts the services to the remaining hardware without the end user ever knowing there was a hardware problem.
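That "keep N copies running, wherever there is healthy hardware" idea is visible even in the smallest Kubernetes example, a Deployment. This is a generic sketch (the `nginx` image and the name `web` are arbitrary), written to a file so you can see the shape of it:

```shell
# A minimal Kubernetes Deployment: ask for three replicas of a web server,
# and Kubernetes keeps three running, rescheduling them onto healthy
# machines when a node dies.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
EOF
# On a cluster you would apply it with: kubectl apply -f deployment.yaml
```

You declare the desired state (three replicas) and the cluster continuously works to match it; you never say *which* server runs what.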
I haven't learned much more than that about Kubernetes yet.