Why do we analyze the expected running time of a randomized algorithm and not its worst-case running time?
Because the algorithm's behavior depends on the values returned by the random-number generator, two runs on the same input can take different amounts of time; the running time is a random variable even for a fixed input. We therefore analyze its expectation, taken over the algorithm's random choices, since no single "worst-case input" determines the running time of any particular run.
When RANDOMIZED-QUICKSORT runs, how many calls are made to the random-number generator RANDOM in the worst case? How about in the best case? Give your answer in terms of $\Theta$-notation.
Worst: RANDOM is called exactly once per call to RANDOMIZED-PARTITION, and each partitioning places one pivot into its final position. When every split is maximally unbalanced (subarrays of sizes $0$ and $n-1$), there are $n-1$ partitioning calls, so RANDOM is called $\Theta(n)$ times.
Best: Even with the most favorable splits, the recursion only stops on subarrays of size at most $1$, and each call to RANDOM removes exactly one element (the pivot). A recursion tree with $k$ partitioning calls has $k+1$ leaves of size at most $1$, so $n \le k + (k+1)$ and $k \ge (n-1)/2$. Hence the best case is also $\Theta(n)$ calls to RANDOM.
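As a rough sanity check (not part of the exercise), here is a minimal Python sketch of randomized quicksort with a Lomuto partition, where `random.randint` stands in for RANDOM(p, r) and a counter records how often it is invoked. On any run, the count lands between roughly $(n-1)/2$ and $n-1$, consistent with the $\Theta(n)$ answers above.

```python
import random


def partition(a, p, r):
    """Lomuto partition around a[r]; returns the pivot's final index."""
    x = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1


def randomized_quicksort(a, p, r, counter):
    """Sort a[p..r] in place; counter[0] counts calls to the RNG."""
    if p < r:
        counter[0] += 1                 # one RANDOM(p, r) call per partitioning
        i = random.randint(p, r)        # stand-in for RANDOM(p, r)
        a[i], a[r] = a[r], a[i]         # move the random pivot into place
        q = partition(a, p, r)
        randomized_quicksort(a, p, q - 1, counter)
        randomized_quicksort(a, q + 1, r, counter)


if __name__ == "__main__":
    n = 1000
    for trial in range(3):
        a = list(range(n))
        random.shuffle(a)               # same multiset of keys each trial
        counter = [0]
        randomized_quicksort(a, 0, n - 1, counter)
        print(f"trial {trial}: {counter[0]} calls to the RNG")
```

The count varies from run to run (which also illustrates the answer to the first question), but it always stays linear in $n$.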