When Google and IBM announced their Internet-Scale Computing Initiative last October - which dedicated a cluster of 1,600 computers to researchers, free of charge - it was not clear (to me) whether this was a U.S.-only initiative, or whether it was also available (or would eventually become available) to non-U.S. researchers:
The University of Washington was the first to join the initiative. A small number of universities will also pilot the program, including Carnegie Mellon University, Massachusetts Institute of Technology, Stanford University, the University of California at Berkeley and the University of Maryland. In the future, the program will be expanded to include additional researchers, educators and scientists.
Now, with the NSF's announcement that it is partnering with Google and IBM on this initiative, in what they are calling the Cluster Exploratory (CluE), it is even less clear (or maybe clearer that it is only available to U.S. researchers?), with the NSF responsible for selecting who can use the resource:
"NSF will then select the researchers to have access to the cluster and provide support to the researchers to conduct their work."

This initiative is built using Apache Hadoop (primarily a Yahoo project), which includes open-source implementations of Google's MapReduce and GFS. With more supercomputing / cloud computing resources going commodity, more researchers will be altering their compute job implementations to be more MapReduce-friendly.
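To give a feel for what "MapReduce-friendly" means, here is a minimal sketch of the MapReduce pattern in plain Python - not Hadoop's actual Java API - for the classic word-count example. The function names (`map_phase`, `reduce_phase`, `mapreduce_wordcount`) are illustrative, not part of any framework; in real Hadoop the shuffle/sort step shown here is handled by the cluster, which is what lets the same mapper and reducer scale across 1,600 machines.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in a document.
    for word in doc.split():
        yield (word.lower(), 1)

def reduce_phase(word, counts):
    # Reducer: sum all the counts emitted for one word.
    return (word, sum(counts))

def mapreduce_wordcount(docs):
    # Shuffle/sort: group intermediate pairs by key, as the
    # framework would do between the map and reduce phases.
    intermediate = sorted(
        (pair for doc in docs for pair in map_phase(doc)),
        key=itemgetter(0),
    )
    return dict(
        reduce_phase(word, (count for _, count in pairs))
        for word, pairs in groupby(intermediate, key=itemgetter(0))
    )

docs = ["the quick brown fox", "the lazy dog"]
print(mapreduce_wordcount(docs))
```

The point of the pattern is that the mapper and reducer are pure functions over individual records, so the framework is free to partition the map and reduce work across machines however it likes.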
Related: Yahoo's Doug Cutting on MapReduce and the Future of Hadoop