Tuesday, August 30, 2011
by Tim Stephens at UCSC.
According to coauthor Piero Madau, professor of astronomy and astrophysics at UC Santa Cruz, the project required a large investment of supercomputer time, including 1.4 million processor-hours on NASA's state-of-the-art Pleiades supercomputer, plus additional supporting simulations on supercomputers at UCSC and the Swiss National Supercomputing Center.
“The simulation follows the interactions of more than 60 million particles of dark matter and gas. A lot of physics goes into the code--gravity and hydrodynamics, star formation and supernova explosions--and this is the highest resolution cosmological simulation ever done this way,” said Guedes…
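To make the quoted cost concrete, here is a minimal sketch (not the actual simulation code, whose methods are far more sophisticated) of direct-summation gravity for an N-body step. The O(N²) pairwise loop shows why particle count dominates the cost, and why production codes replace it with tree or particle-mesh methods; the unit masses and softening length here are illustrative choices only.

```python
def accelerations(positions, masses, G=1.0, softening=0.01):
    """Pairwise gravitational acceleration on each particle (O(N^2))."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Vector from particle i to particle j.
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            # Softening avoids a singularity when particles get close.
            r2 = sum(d * d for d in dx) + softening ** 2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += G * masses[j] * dx[k] * inv_r3
    return acc

# Two equal masses one unit apart pull on each other symmetrically.
a = accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0])
```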
By Stacey Higginbotham at Gigaom.
A few million Americans may find their YouTube requests get delivered faster on Tuesday as Google, OpenDNS, VeriSign and several content delivery networks announce the Global Internet Speed Up effort.
At the center of the partnership between DNS providers and participating CDNs is the creation of a standard that attaches location data to a DNS request, so a user’s request for content goes to a server nearby. Typically, a CDN or content provider routes a user based on the address of the DNS server, as opposed to the user’s location, but they aren’t always in the same region.
For now, only users of Google’s Public DNS service, OpenDNS and Verisign will send out DNS information with a snippet of information gleaned from the user’s IP address. That will help the domain name servers that direct traffic around the web to send that traffic to the closest provider. As for privacy concerns about attaching IP addresses to a DNS request, OpenDNS founder David Ulevitch says the information only goes to companies that would see the IP address in a typical HTTP web request, so it’s not sharing any more information than is typical.
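A sketch of the privacy mechanism described above, which is the core of the client-subnet DNS extension (the draft behind this effort was later published as RFC 7871): the resolver forwards only a truncated network prefix of the user's IP address, typically a /24, rather than the full address. The CDN gets enough location signal to pick a nearby server without seeing the exact host. The function name and example address here are illustrative, not from any real implementation.

```python
import ipaddress

def truncate_client_ip(ip: str, prefix_len: int = 24) -> str:
    """Zero out the host bits, keeping only the first prefix_len bits."""
    net = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return f"{net.network_address}/{prefix_len}"

# Only the network prefix, not the user's exact address, leaves the resolver.
print(truncate_client_ip("203.0.113.57"))  # → 203.0.113.0/24
```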
Monday, August 29, 2011
Large Scale Hadoop Data Migration at Facebook. Paul Yang describes moving Facebook’s 30-PB Hadoop cluster to a new datacenter via replication. (Previous Post)
As the majority of the analytics is performed with Hive, we store the data on HDFS — the Hadoop distributed file system. In 2010, Facebook had the largest Hadoop cluster in the world, with over 20 PB of storage. By March 2011, the cluster had grown to 30 PB — that’s 3,000 times the size of the Library of Congress! At that point, we had run out of power and space to add more nodes, necessitating the move to a larger data center.
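A minimal sketch of the replication idea behind a live migration like this (not Facebook's actual tooling, which was custom-built for Hive and HDFS): compare file listings on the source and destination clusters and copy anything new or changed, so the bulk of the data can be mirrored repeatedly while the source stays in use, leaving only a small final sync at cutover. The path names and metadata format below are illustrative assumptions.

```python
def plan_replication(source: dict, destination: dict) -> list:
    """Both dicts map file path -> (size, mtime).
    Return the sorted paths that are missing or stale on the destination."""
    return sorted(
        path for path, meta in source.items()
        if destination.get(path) != meta
    )

src = {"/warehouse/a": (100, 1), "/warehouse/b": (200, 2), "/warehouse/c": (50, 3)}
dst = {"/warehouse/a": (100, 1), "/warehouse/b": (150, 2)}  # b changed, c missing
print(plan_replication(src, dst))  # → ['/warehouse/b', '/warehouse/c']
```

Each replication pass shrinks the delta, so the final switchover window only has to cover the files that changed since the last pass.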