2011-10-20, 16:26  #694
Jun 2003
2^{2}×3^{2}×11×13 Posts 
Sort of. The level of TF done does affect the optimal P1 bounds.

2011-10-20, 16:36  #695
Jun 2003
491_{16} Posts 
It's not critically important. The client uses this information to compute the optimal bounds. If the client thinks the exponent has been factored less deeply than it actually has been (or will be; the order of factoring makes no difference to the factors you will actually find), then it will choose somewhat higher bounds than optimal.
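The dependence on TF depth can be sketched with the usual GIMPS rule of thumb that the chance of a Mersenne number having a prime factor between 2^a and 2^b is roughly ln(b/a). This is only a heuristic, not prime95's actual bounds code, but it shows why deeper TF shrinks the optimal P1 bounds:

```python
import math

def chance_of_factor_between_bits(lo_bits, hi_bits):
    """Rough GIMPS heuristic: the probability that a Mersenne number
    has a prime factor between 2^lo_bits and 2^hi_bits is about
    ln(hi_bits / lo_bits)."""
    return math.log(hi_bits / lo_bits)

# The deeper TF has already gone, the less "new" factor space is left
# for P1 to claim, so the payoff (and hence the optimal bounds) drop.
# The 100-bit ceiling here is an arbitrary illustrative cutoff.
left_after_tf_to_71 = chance_of_factor_between_bits(71, 100)
left_after_tf_to_74 = chance_of_factor_between_bits(74, 100)
```

With these numbers, TF to 74 bits instead of 71 leaves measurably less expected return for the subsequent P1 run, which is why the client picks lower bounds when it believes TF has gone deeper.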

2011-10-20, 16:50  #696
Jun 2003
7×167 Posts 
In the case where the assignment is a P1 (rather than an LL assignment getting an initial P1), there is another issue: the more time spent on each assignment, the fewer the client is able to complete in a given period of time, and the more assignments pass through to LL testing without having been P1ed first. Of these, about half never get a stage 2. This means that, in exchange for a slightly increased chance of finding a factor in the exponents we do test, we're losing even more on the exponents we don't. 
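The throughput trade-off can be made concrete with some toy numbers (entirely hypothetical figures, just to show the shape of the argument):

```python
def factors_per_day(chance_per_exponent, days_per_exponent):
    # Expected factors found per machine-day of P1 work: the chance
    # of success on each exponent divided by the time spent on it.
    return chance_per_exponent / days_per_exponent

# Hypothetical figures: deeper bounds raise the per-exponent success
# rate a little, but each exponent takes noticeably longer.
deep    = factors_per_day(0.050, 1.0)   # 5.0% chance, 1.0 day each
default = factors_per_day(0.040, 0.7)   # 4.0% chance, 0.7 days each
```

With these made-up figures the default bounds yield about 0.057 factors per day versus 0.050 for the deeper bounds, i.e. the faster runs remove more exponents from the LL queue per unit of effort, which is the point being made above.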

2011-10-20, 20:56  #697
Dec 2010
Monticello
5·359 Posts 
The correct optimality criterion, for the vast majority of Mersenne exponents, is how to prove the most of them composite for the least effort. Factors found per GHz-day is the correct metric.
Mr. P1 points out that by doing relatively deep P1, we have many exponents not getting any stage 2 P1, which has a significantly higher return of factors found per time spent. Thus, exponents that could have had a factor found relatively easily are getting LL tested instead. This is also happening with TF, though in that case the change is due to an increase in the ease of doing TF on GPUs. 
2011-10-20, 21:32  #698
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts 
BS extension
How does P95/64 indicate that BS has kicked in? Also, what is Stage 1 GCD?

2011-10-20, 22:07  #699
Jun 2003
7×167 Posts 
You'll see an "E=6" (or higher) in your results.txt file, if it fails to find a factor. For some reason it doesn't say when it finds one.
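For anyone wondering what the E means: it is the degree of the Brent-Suyama extension used in stage 2. Roughly (this is the textbook description of the idea, not necessarily prime95's exact implementation): stage 2 normally finds a factor $q$ only if the largest prime $s$ dividing $q-1$ satisfies $s \le B_2$. With extension degree $E$, the stage-2 terms have the form

```latex
x^{m^E} - x^{d^E}
```

and because $m^E - d^E$ factors algebraically, e.g.

```latex
m^6 - d^6 = (m-d)(m+d)(m^2+md+d^2)(m^2-md+d^2)
```

a prime $s$ somewhat beyond $B_2$ can occasionally divide one of the algebraic cofactors and still be picked up. That is why a higher E slightly raises the chance of finding a factor at the same bounds.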
Last fiddled with by Mr. P1 on 2011-10-20 at 22:14 

2011-10-20, 22:09  #700
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts 
Thanks, Mr. P1!

2011-10-20, 22:36  #701
Nov 2002
Anchorage, AK
3×7×17 Posts 


2011-10-20, 22:51  #702
Jun 2003
7×167 Posts 
Basically yes. Turning every computer that has enough memory over to doing P1 is probably the best thing you could be doing for GIMPS. The only exception is if you have TF-capable GPUs. Currently the GPU factoring programs also need a great deal of CPU time (typically an entire core or two) to support the GPU. Depending on the specific work you do, this may be even more beneficial to GIMPS than devoting those cores to P1.
Last fiddled with by Mr. P1 on 2011-10-20 at 22:57 
2011-10-20, 23:12  #703
Jun 2003
7·167 Posts 
Despite the logic, it "feels" wrong to deliberately reduce the bounds in any way, so I don't do this. A dedicated P1er with a reasonable amount of memory who uses prime95's default bounds calculation is making a contribution to GIMPS that is significantly greater than if he devoted his cores to LL testing. And that is good enough for me. 

2011-10-20, 23:28  #704
"Kieren"
Jul 2011
In My Own Galaxy!
10011110101110_{2} Posts 
Quote Mr. P1: "You'll see an "E=6" (or higher) in your results.txt file, if it fails to find a factor."
Ah! Like this:

[Wed Oct 19 21:45:54 2011] UID: kladner/pod64, M52315441 completed P1, B1=610000, B2=15555000, E=6, We4: 498F4FED, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[Wed Oct 19 22:32:00 2011] UID: kladner/pod64, M52310233 completed P1, B1=610000, B2=15555000, E=6, We4: 49964FA4, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

So given the discussion just previous, perhaps I don't need to allocate quite so much RAM; perhaps I should instead dedicate another worker to P1 and spread the benefits further. 