Interesting, thanks. And yeah, it would have surprised me if it were possible to do so without finding the smaller primes first.
My intuition tells me that one could probably prove that the computational complexity can’t be lowered.
I’m sure there’s already a proof that there are infinitely many primes (Euclid’s, in fact: multiply any finite list of primes together and add 1, and the result has a prime factor not in the list), because by multiplying whole numbers alone, you can’t fill all the gaps on the number line.
And if you look at it the way Eratosthenes does, iterating from 2 towards ∞, then every time you find a new prime, your prime-detection function f(x) needs to be extended to also check that new prime as a potential divisor.
This feels like a self-extending rule system, where the rules depend on the (partial) solution. So, presumably by definition, you have to compute the partial solution for all numbers that could be relevant to the rules for a given number, which is the usual bound in the Sieve of Eratosthenes: for a candidate n, only the primes up to √n matter.
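That self-extending behavior can be sketched in a few lines of Python (the function name is my own, just for illustration):

```python
import math

def primes_up_to(limit):
    """Incrementally build the list of primes up to `limit`.

    Each candidate n is tested only against the primes found so far,
    and only those up to sqrt(n) -- so the "rules" (divisor checks)
    grow as the partial solution (the prime list) grows.
    """
    primes = []
    for n in range(2, limit + 1):
        root = math.isqrt(n)
        # n is prime iff no already-found prime <= sqrt(n) divides it
        if all(n % p != 0 for p in primes if p <= root):
            primes.append(n)  # extend the rule set with the new prime
    return primes
```

For example, `primes_up_to(30)` returns `[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]`. Each newly appended prime becomes part of the divisibility rules that all later candidates are checked against.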