The following is my solution to Project Euler's third problem. Here's the Python implementation:
def isPrime(n):
    for i in range(n/2)[3::2]:
        if n % i == 0:
            return False
    return True

def largestPrime(n):
    for i in range(3, n/2):
        if n % i == 0 and isPrime(n/i):
            return n/i
    return -1

largestPrime(600851475143)
And here's the same implementation rewritten in C++:
#include <iostream>
using namespace std;

bool isPrime(long n)
{
    for(int i = 3; i < n/2; i += 2)
    {
        if(n % i == 0)
            return false;
    }
    return true;
}

long largestPrime(long n)
{
    for(int i = 2; i < n/2; i++)
    {
        if(n % i == 0 and isPrime(n/i))
            return n/i;
    }
    return -1;
}

int main()
{
    cout << largestPrime(600851475143) << endl;
    return 0;
}
When I run the C++ code, it computes the correct answer (6857) within seconds. But when I run the Python code, it seems to go on forever. Why is it that Python's performance is so poor here?
Python is the guy who opens the door and gets shot. C++ is the one who knocks.

Does range work with 64-bit integers?

What is sizeof(long) for your C++ program? If it's 32 bits, the number you're passing to largestPrime will be truncated, and if the answer is still correct, it's likely that the algorithm coincidentally doesn't need the higher-order bits; it might not work reliably for other numbers. Python, on the other hand, uses something logically akin to the C++ GMP library to handle arbitrary-length integers, albeit much more slowly. (The first snippet below illustrates the size mismatch.)

Suppose q = a*b with a <= b. In that case a*a <= q. Why then do we need to check all the way up to q/2? (See the second snippet below.)

Use xrange instead of range in the Python code. Otherwise, you will get a huge runtime purely from the memory allocations for those large ranges. xrange generates the numbers as you go, as does the C++ for-loop. With that change, when I run the code, I consistently see the Python code being about 5 times slower than the C++, which is expected and matches other simple benchmarks like these that I have seen before. (A minimal sketch of the change is at the end.)
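To illustrate the size mismatch, here is a small check of my own (not part of the original post): the input needs 40 bits, so a 32-bit long cannot hold it, while a Python int simply grows as needed.

n = 600851475143
print(n.bit_length())   # 40: too wide for a 32-bit long
print(n <= 2**31 - 1)   # False: outside the signed 32-bit range
print(n % 2**32)        # 3851020999: roughly the low 32 bits a truncating conversion would keep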
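A short sketch of that divisor-bound observation, written as an illustration rather than taken from the thread: if q = a*b with a <= b, then a cannot exceed sqrt(q), so trial division can stop there instead of running to q/2.

def is_prime(q):
    # If q = a*b with a <= b, then a*a <= q, so it is enough to
    # test odd candidates up to sqrt(q) instead of q/2.
    if q < 2:
        return False
    if q % 2 == 0:
        return q == 2
    i = 3
    while i * i <= q:
        if q % i == 0:
            return False
        i += 2
    return True

print(is_prime(6857))  # True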
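And a minimal sketch of the suggested xrange change, assuming the code runs under Python 2 (where xrange exists; in Python 3, range is already lazy and needs no change). This is my rendering of the suggestion, not the answerer's exact code.

def isPrime(n):
    # xrange yields values one at a time, so no multi-billion-element
    # list is allocated the way range() would allocate in Python 2.
    for i in xrange(3, n // 2, 2):
        if n % i == 0:
            return False
    return True

def largestPrime(n):
    for i in xrange(3, n // 2):
        if n % i == 0 and isPrime(n // i):
            return n // i
    return -1

print(largestPrime(600851475143))  # 6857

On typical 64-bit Linux/macOS builds xrange accepts bounds this large; where C long is only 32 bits it would raise OverflowError, which is another reason the divisor-bound trick above is worth applying.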