Yes, of course, I understand all these arguments, but they are not the totality of computer science. If you distill CS down to the part where these statements are strictly true, what you have is mathematics. But computer science is just as much engineering, and at that point it starts to be like saying that materials are irrelevant to bridge building.

Take QuickSort, which is almost the canonical example of what you're talking about, in that it is independent of the language you implement it in. It works just as well in Lisp as in C as in Forth. And yet, while its asymptotic complexity is the same at every scale, there is always an input size at which the textbook implementation becomes worse than useless. Why? Because as soon as the data doesn't fit into the memory of a single machine, you're in a swamp of real-world concerns. The big O doesn't change, but as soon as you need more than one machine, those pesky constants do, by about five to six orders of magnitude! If you want to sort trillions of things, you need to understand the physical machinery the algorithms live on, with all the complexity of networks and the properties of hardware. For large enough datasets, reliability and failure recovery, rather than the algorithm proper, become the issue, because there are so many parts in play that something will probably break.

These aren't technicalities to sneer at; the world is actually full of problems that are so large they can only be done inefficiently. If you crack "The Art of Computer Programming" almost anywhere, you will find the connection between physical machines and algorithms, either explicitly or implicitly. And Knuth is about as mathematical as programmers get!
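To put some code behind the memory point: below is a rough sketch, in Python purely for illustration, of what sorting starts to look like once the data no longer fits in RAM, i.e. an external merge sort that spills sorted runs to disk and merges them back. The chunk size, file names, and one-integer-per-line format are assumptions invented for the example; the point is that the asymptotic cost is still O(n log n), but the shape of the code is now dictated by the hardware.

```python
# Sketch of an external merge sort. The big O is unchanged, but the
# structure of the code is driven by how much fits in memory at once.
# CHUNK, the file paths, and the one-int-per-line format are assumptions
# made up for this example.
import heapq
import os
import tempfile

CHUNK = 1_000_000  # how many lines we assume fit comfortably in RAM

def sort_runs(path):
    """Read the input in RAM-sized chunks, sort each, spill to temp files."""
    runs = []
    with open(path) as f:
        while True:
            chunk = [line for _, line in zip(range(CHUNK), f)]
            if not chunk:
                break
            chunk.sort(key=int)
            run = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            # normalize line endings so runs merge cleanly
            run.writelines(l if l.endswith("\n") else l + "\n" for l in chunk)
            run.close()
            runs.append(run.name)
    return runs

def merge_runs(runs, out_path):
    """K-way merge of the sorted runs back into one sorted output file."""
    files = [open(r) for r in runs]
    with open(out_path, "w") as out:
        out.writelines(heapq.merge(*files, key=int))
    for f in files:
        f.close()
        os.remove(f.name)

if __name__ == "__main__":
    runs = sort_runs("numbers.txt")          # hypothetical input file
    merge_runs(runs, "numbers.sorted.txt")
```

And this is still the easy case: one machine, one disk. Spread the data across a cluster and most of the remaining work is partitioning, networking, and recovering from workers that die mid-sort, which is exactly the territory the big-O analysis says nothing about.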