A professor once told us in class that Windows, Linux, OS X, and UNIX scale with threads rather than processes, so threads would likely benefit your application even on a single processor, because your application would get more time on the CPU.
I tried it with the following code on my machine (which has only one CPU).
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

pthread_t xs[10];

/* Busy-wait worker: counts down from 2^30. Note that an optimizing
   compiler may remove this empty loop entirely, so build without
   optimization when timing it. */
void *nop(void *ptr) {
    unsigned long n = 1UL << 30;
    while (n--);
    return NULL;
}

/* Run the busy loop in ten concurrent threads. */
void test_one() {
    size_t len = (sizeof xs) / (sizeof *xs);
    while (len--)
        if (pthread_create(xs + len, NULL, nop, NULL))
            exit(EXIT_FAILURE);
    len = (sizeof xs) / (sizeof *xs);
    while (len--)
        if (pthread_join(xs[len], NULL))
            exit(EXIT_FAILURE);
}

/* Run the busy loop ten times sequentially in the main thread. */
void test_two() {
    size_t len = (sizeof xs) / (sizeof *xs);
    while (len--) nop(NULL);
}

int main(int argc, char *argv[]) {
    test_one();
    // test_two();
    printf("done\n");
    return 0;
}
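For reference, here is how one could confirm how many CPUs the OS actually reports as online. This is just a minimal sketch; _SC_NPROCESSORS_ONLN is not strictly POSIX, but it is a common extension available on Linux and OS X.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* _SC_NPROCESSORS_ONLN is a widely supported (non-POSIX) extension
       reporting the number of processors currently online. */
    long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpus < 1) {
        perror("sysconf");
        return 1;
    }
    printf("online CPUs: %ld\n", ncpus);
    return 0;
}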
Both tests were virtually identical in speed.

test_one (threads):
real 0m49.783s
user 0m48.023s
sys 0m0.224s

test_two (sequential):
real 0m49.792s
user 0m49.275s
sys 0m0.192s
This made me think, "Wow, threads suck." But repeating the test on a university server with four processors nearly quadrupled the speed of the threaded version.
test_one (threads):
real 0m7.800s
user 0m30.170s
sys 0m0.006s

test_two (sequential):
real 0m30.190s
user 0m30.165s
sys 0m0.004s
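To rule out other differences between the two machines, one thing I could try is pinning the whole test to a single core on the four-CPU server and seeing whether the numbers then match my home machine. A minimal sketch, assuming Linux with the non-portable sched_setaffinity call (pin_process_to_cpu0 is just a name I made up):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

/* Restrict the calling thread to CPU 0. Threads created afterwards
   inherit the mask, so all ten workers end up time-slicing on one core. */
static void pin_process_to_cpu0(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    if (sched_setaffinity(0, sizeof set, &set)) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
}

Calling pin_process_to_cpu0() at the top of main, before test_one(), should make the server behave like a single-CPU box; if the timings then look like my first set of numbers, the speedup really does come from the extra processors.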
Am I overlooking something when interpreting the results from my home machine?