Two cases here:
In the general case where you are not setting container memory limits, containers will just get as much memory from the OS as they ask for, which is fine so long as the OS has memory to give. So you probably don't care how much memory any one container has consumed - only whether the sum of all container memory requests can be satisfied without impact. And yes, as you note above, the way to monitor this is at the OS level: if the OS has to resort to active swapping, then you have exceeded your memory capacity. The standard Linux Memory Usage datasource will monitor and alert you on this - you don't need (or want) to monitor swap usage and activity per container. If you do have memory issues, you can then use the Docker Containers Overview graphs to see which containers are using the most memory, and maybe limit them; maybe move them to a different container host; maybe just reconfigure the applications in them (reducing the max heap size if they are Java apps, say).
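For illustration, here's a rough sketch of what "watch for active swapping at the OS level" amounts to, reading /proc/vmstat directly on the container host. The Linux Memory Usage datasource does this kind of collection for you; the 10-second sample interval and the zero threshold below are just placeholders.

```python
#!/usr/bin/env python3
"""Sketch: detect active swapping at the OS level via /proc/vmstat (Linux)."""
import time


def read_vmstat():
    """Parse /proc/vmstat into a dict of counters."""
    stats = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats


def swap_activity(interval=10):
    """Return pages swapped in/out per second over the sample interval."""
    before = read_vmstat()
    time.sleep(interval)
    after = read_vmstat()
    swin = (after["pswpin"] - before["pswpin"]) / interval
    swout = (after["pswpout"] - before["pswpout"]) / interval
    return swin, swout


if __name__ == "__main__":
    swin, swout = swap_activity()
    # Sustained nonzero swap activity suggests the sum of container memory
    # demands exceeds physical RAM; the zero threshold here is illustrative.
    if swin > 0 or swout > 0:
        print(f"Active swapping: {swin:.1f} pages/s in, {swout:.1f} pages/s out")
    else:
        print("No swap activity observed")
```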
If you do care about the usage of a specific container (say it's hosting an app known to have memory leaks), you could just set an alert on that instance of the Docker Containers datasource for when mem_working_set exceeds some limit. You probably would not want to express it as a percent of total OS memory - that's just another abstraction you don't need (and the threshold would possibly need to be recalculated if you moved the container to another host).
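As a rough illustration of that per-container check (the datasource instance alert is the simpler route), here's a sketch using the Docker SDK for Python. The container name and byte threshold are placeholders, and the usage-minus-inactive-file calculation approximates what gets reported as mem_working_set.

```python
#!/usr/bin/env python3
"""Sketch: flag one container whose working-set memory exceeds a fixed byte threshold."""
import docker  # pip install docker

CONTAINER_NAME = "leaky-app"       # hypothetical container name
THRESHOLD_BYTES = 2 * 1024**3      # 2 GiB, an absolute value rather than a % of host RAM

client = docker.from_env()
container = client.containers.get(CONTAINER_NAME)
stats = container.stats(stream=False)

mem = stats["memory_stats"]
usage = mem.get("usage", 0)
# Working set = usage minus inactive file cache; the key name differs between
# cgroup v2 ("inactive_file") and cgroup v1 ("total_inactive_file").
detail = mem.get("stats", {})
inactive = detail.get("inactive_file", detail.get("total_inactive_file", 0))
working_set = usage - inactive

if working_set > THRESHOLD_BYTES:
    print(f"{CONTAINER_NAME}: working set {working_set / 1024**2:.0f} MiB exceeds threshold")
```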
If you are setting container memory limits, cadvisor will report those limits. The Docker Containers datasource, however, does not currently read them. It could trivially be made to do so, so comment if you want that. It could then compare the memory used to the container's memory limit and alert if usage gets too close - although I'm not sure what your action would be, other than to investigate before the container hits the limit, requests more, and gets killed...
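If the datasource did read the limits, the check would amount to something like this sketch (again using the Docker SDK for Python rather than the datasource itself; the 90% warning ratio is an arbitrary example):

```python
#!/usr/bin/env python3
"""Sketch: warn when a container's memory usage approaches its configured limit."""
import docker  # pip install docker

WARN_RATIO = 0.9  # warn at 90% of the configured limit; example threshold only

client = docker.from_env()
for container in client.containers.list():
    limit = container.attrs["HostConfig"]["Memory"]  # 0 means no limit was set
    if not limit:
        continue
    stats = container.stats(stream=False)
    usage = stats["memory_stats"].get("usage", 0)
    if usage / limit >= WARN_RATIO:
        print(f"{container.name}: {usage / 1024**2:.0f} MiB of "
              f"{limit / 1024**2:.0f} MiB limit ({usage / limit:.0%})")
```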