[olug] Linux Journal - Nov 2010 - LHC
Carl Lundstedt
clundst at unlserve.unl.edu
Thu Oct 21 02:06:35 UTC 2010
On 10/20/2010 5:26 PM, Kevin D. Snodgrass wrote:
> --- On Wed, 10/20/10, Carl Lundstedt<clundst at unlserve.unl.edu> wrote:
>> Linux Journal has made this article
>> free to read:
>> http://www.linuxjournal.com/content/the-large-hadron-collider
>>
>> Cheers,
>> Carl Lundstedt
>> UNL
> I love it! "PhEDEx"
>
> One thing, in the article you mention some software analogous to "SETI at home" (or folding at home, which I used to run) called OSG, but for university computing centers. Has anyone considered extending that to an actual public client, i.e. LHC at home or maybe HiggsBoson at home? (GodParticle at home?)
>
> Kevin D. Snodgrass
OSG is used to share the resources of a campus cluster. It uses Globus
and a fairly heavyweight middleware stack. It presumes a Linux
environment, a place to install software, and a worker node the job will
essentially own and not be evicted from. OSG is for dedicated sites.
There's something out there called Einstein at home that lets scientists
get CPU hours from volunteer machines. We sometimes get jobs that start
einstein at home on our workers, do their work, then leave. Einstein at
home is getting some use.
There is also an LHC at home (http://lhcathome.cern.ch/), but the CMS (my
experiment) software stack is really complex, both because of what it is
and because of who wrote it (physicists aren't the best coders). Right
now a user analysis job assumes it will have access to 1.5 GB of RAM and
10 GB of disk; that's a little much to ask of volunteers. That's not the
real challenge, though. A standard CMS data file is 2 GB, so moving the
data around requires a ton of bandwidth. That's a tall order for both
the volunteer and the experiment. I know I'd hate to serve a file to a
machine only to have the job killed before it could finish. The
opportunity for waste is currently too high (IMO).
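The waste argument can be made concrete with a back-of-the-envelope
calculation. A sketch follows; the eviction rate is an invented number
for illustration, not a CMS measurement:

```python
# Toy model of bandwidth wasted when serving large data files to
# volunteer machines that may kill a job before it finishes.
# The 30% eviction probability below is a made-up assumption.

def expected_wasted_gb(file_size_gb, eviction_prob):
    """Expected GB transferred per job that never produces a result."""
    return file_size_gb * eviction_prob

# A standard CMS data file is about 2 GB.
per_job_waste = expected_wasted_gb(2.0, 0.30)
print(per_job_waste)  # roughly 0.6 GB of transfer wasted per job
```

At any realistic eviction rate, a sizeable fraction of every 2 GB
transfer buys nothing, which is the waste being objected to above.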
Right now the computing model has the jobs chasing the data. There are
efforts to invert that model and have the data chase the CPU resources.
If that inversion takes place, it might be time to look at using
commodity, volunteer CPUs.
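The trade-off between the two models can be sketched as a toy cost
comparison. Every number here (queueing cost, transfer cost, job counts)
is invented for illustration; the real CMS computing model is far more
involved:

```python
# Toy contrast of "jobs chase data" vs. "data chases CPUs".
# All cost constants are arbitrary, illustrative units.

FILE_SIZE_GB = 2.0        # typical CMS data file size
QUEUE_COST_PER_JOB = 0.5  # pretend cost of waiting at the one site holding the data
XFER_COST_PER_GB = 1.0    # pretend cost of shipping a file to a remote CPU

def jobs_chase_data(n_jobs):
    # Every job runs where the data already lives: nothing is
    # transferred, but jobs pile up in that site's queue.
    return n_jobs * QUEUE_COST_PER_JOB

def data_chases_cpus(n_remote_jobs):
    # Jobs run wherever a CPU is free; each job that lands away from
    # the data pays to move the file there.
    return n_remote_jobs * FILE_SIZE_GB * XFER_COST_PER_GB

print(jobs_chase_data(100))     # 50.0 units of queueing
print(data_chases_cpus(10))     # 20.0 units: cheap if few jobs are remote
print(data_chases_cpus(50))     # 100.0 units: expensive if most are
```

With these made-up numbers, shipping data to free CPUs wins only when
most jobs can land near the data or transfers are cheap, which is why
the inversion matters before volunteer CPUs become attractive.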
Thanks for the interest,
Carl