The Chronicle of Higher Education has an interesting discussion on the increasing convergence of university libraries and IT organizations. It is a Q&A session with Eugene Spencer, an independent consultant to U.S. colleges, on possible mergers of university IT and library organizations. I think one of the important questions for many of the people concerned is which of the two cultures will dominate in the new structure. I am aware of examples in Canada where university CIOs have been librarians.
Thursday, January 31, 2008
The impact of ICT on the environment has very quickly become an issue, with such blogs as Bill St. Arnaud's Green IT/Broadband and Cyber-Infrastructure examining some of the issues.
Linux has long been touted (see the Linux-Ecology-HOWTO) as a greener operating system, mostly for its ability to run on older or smaller hardware while delivering the same performance as other OSes. These qualities extend the useful lifespan of hardware, reducing the amount of ICT-based waste.
But Linux's power management for hardware is relatively poor compared to the Microsoft or Apple OS offerings. To remedy this, the Green Linux Workgroup was created in August 2007 to consolidate efforts to improve Linux power management. At the same time, IBM announced its "Big Green Linux" initiative, which includes server consolidation. Their initial consolidation efforts include moving 3,900 of their own servers to 30 System z mainframes running Linux. IBM's Linux Technology Center was also involved in the energy-saving tickless Linux kernel released in April 2007.
These efforts are not lost on Linus: today he acknowledged (Torvalds: Linux ready to go green) the need to improve Linux power management.
Bill St. Arnaud's post Upcoming Conferences and Studies on ICT and Global Warming lists a number of upcoming conferences on ICT environment issues, and I hope there will be some Linux representation extolling the benefits of Linux (where appropriate of course).
Wednesday, January 30, 2008
February 10, 2008 marks the 10th anniversary of the release of XML 1.0 by the W3C (although some celebrate November 14, 1996, the release date of the working draft, while others celebrate August 1996, the anniversary of the conference at which it was first discussed: hey, it's the Web: let a thousand flowers bloom!).
XML has had an incredibly wide and deep impact across industry, science, technology: it is ubiquitous. As a technology it has disrupted the database community and industry, the publishing community and others.
Many data standards groups in the sciences and in industry -- which previously spent their time developing byzantine formats for their particular information needs -- now spend their efforts developing XML-based byzantine formats for their particular information needs. There are few science, arts, social science or humanities disciplines or industry sectors that do not have one or more specialized "FooML" XML dialects for their needs, such as:
- Chemical Markup Language (CML)
- eXploration and Mining Markup Language (XMML)
- Geography Markup Language (GML)
- many, many others...
- Celebrating 10 Years of XML, IBM Systems Journal: Special issue
- XML entry at Wikipedia
- a History of XML
- XML Considered Harmful ;-)
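To make the "FooML" idea concrete, here is a toy fragment in the spirit of Chemical Markup Language, parsed with Python's standard library. The element names are illustrative only and simplified from the real CML schema:

```python
import xml.etree.ElementTree as ET

# A hypothetical, simplified fragment in the spirit of Chemical Markup
# Language (CML); element names are illustrative, not the real schema.
doc = """
<molecule id="water">
  <atomArray>
    <atom elementType="O" id="a1"/>
    <atom elementType="H" id="a2"/>
    <atom elementType="H" id="a3"/>
  </atomArray>
</molecule>
"""

root = ET.fromstring(doc)
elements = [atom.get("elementType") for atom in root.iter("atom")]
print(root.get("id"), elements)  # water ['O', 'H', 'H']
```

The point of every such dialect is the same: domain semantics ride on top of generic XML tooling, so any off-the-shelf parser can read the format even if it knows nothing about chemistry.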
The NSF announced today that it will be awarding $50M to the iPlant Collaborative, a "dynamic web portal for the Plant Science Cyberinfrastructure Collaborative Community", to address the 'grand challenges' in plant science.
The effort will create a global centre bringing together (virtually and physically) computer scientists, information scientists and plant scientists to work on projects previously untenable due to issues of complexity, scale, discipline boundaries, and a lack of collaboration structures.
The centre "...will bring together and leverage the resources and information generated through the National Plant Genome Initiative, enabling more breadth and depth of research in every aspect of plant science" and will serve as a model for other disciplines on how collaborative cyberinfrastructure can be applied.
Tuesday, January 29, 2008
I am sad to hear the Canadian government will be cutting the office and position of the National Science Advisor. Dr. Arthur Carty will be retiring in March 2008. He was first appointed in April 2004.
In a country where there is a distinct lacuna in the area of informed political decision-making about and around science, the loss of this office seems an unfortunate step backward.
It may be that the science advisor office and position overlapped too much with the recently announced (October 2007) Industry Canada Science, Technology and Innovation Council. Even if this is the case, it is not clear that watering science advice down into this council is the best thing for science in Canada.
Wednesday, January 23, 2008
Research Blogging (RB) is a new blog aggregator service oriented to blogging about peer-reviewed research. It offers a subject hierarchy by discipline and tools for bloggers to make their blogs more RB-aware. These include tools for embedding citations (using COinS, I believe) that allow RB to extract the citation information and display it at the bottom of the aggregation summary. The citation creation tools will also auto-populate if you have the DOI for the article.
What a simple but great idea! Not everyone will be willing to hand-roll citations, especially for articles without a DOI. Perhaps a way to grab them from Connotea or CiteULike is needed.
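For the curious, COinS ("ContextObjects in Spans") works by hiding an OpenURL ContextObject in the title attribute of an empty span with class Z3988, which aggregators like RB can then scrape. A sketch of how a blogger's tool might generate one, using hypothetical citation values (the rft.* field names follow the OpenURL journal format as I understand it):

```python
from urllib.parse import urlencode
from html import escape

def coins_span(atitle, jtitle, doi):
    """Build a COinS span for a journal article.

    COinS embeds an OpenURL ContextObject (URL-encoded key/value pairs)
    in the title attribute of an empty <span class="Z3988">; citation-aware
    tools scan pages for that class and decode the metadata.
    """
    fields = [
        ("ctx_ver", "Z39.88-2004"),
        ("rft_val_fmt", "info:ofi/fmt:kev:mtx:journal"),
        ("rft.atitle", atitle),
        ("rft.jtitle", jtitle),
        ("rft_id", "info:doi/" + doi),
    ]
    return '<span class="Z3988" title="%s"></span>' % escape(urlencode(fields))

# Hypothetical example values, including a placeholder DOI:
print(coins_span("An Example Article Title", "An Example Journal", "10.1234/example"))
```

The span renders as nothing visible on the page, which is what makes the trick so unobtrusive.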
Tuesday, January 22, 2008
The W3C has published an early draft of the new HTML5 (press release). I have mixed feelings about this, but it is clear to many that this is needed. There are still some major issues, identified in the document in red. Hopefully HTML can move forward, although some of the issues look like they may be hard to resolve. Browser wars again?? I hope not...
Also released today: HTML 5 differences from HTML 4
Monday, January 21, 2008
- Science that matters. Interview with Dr. Alastair Glass, Ontario Ministry of Research and Innovation.
- Supporting a knowledge economy. Interview with Richard Dicerni, deputy minister, Industry Canada.
- Cultivating science and technology. Interview with Dr. Pierre Coulombe, president of the National Research Council.
Thursday, January 17, 2008
Citation, Location, And Deposition In Discipline & Institutional Repositories: Recommendations for Data/Publication Linkage. Brian Matthews, Katherine Portwin, Catherine Jones, and Bryan Lawrence. Nov 11 2007.
"A key aim of the CLADDIER project is to investigate the cross-linking and citation of resources (in particular data and their associated publications) held in institutional and subject-based repositories within the research sector... Online repositories storing more dynamic digital objects gives the opportunity to provide a more complete picture of the relationships between them, with backward and forward citations to data and publications being propagated between repositories."
Thanks to Peter Suber.
Posted by Glen Newton at 11:41
"Nature Publishing Group (NPG) announced today that it is introducing a Creative Commons licence for original research articles publishing the primary sequence of an organism’s genome for the first time in any of the Nature journals."
Of particular interest:
"Wherever possible, NPG will apply the Creative Commons Attribution-Non-Commercial-Share Alike licence retrospectively [emphasis added] to original research articles reporting novel primary genome-wide sequences that have previously been published in Nature journals."
Nature Editorial: Shared Genomes (Nature 450, 762; 6 December 2007)
It is good to see Nature trying to fill some of the leadership void in this area.
Taxonomy, with a longer history in publishing, needs a similar effort, as pointed out by Donat Agosti on the Taxacom mailing list (in reference to the NPG announcement):
"In the year of the 250th anniversary of Sytema [sic] Naturae, our goal ought to be to get all our publishers together to agree to release all the descriptions, if not all the publications including descriptions with a creative commons licence. The loss of species is fundamental to our welfare, and thus at equally important as our genes."
Other viewpoints:
- Nature makes genome chain officially free. Information World Review. Jan 16 2008
- NPG introduces a CC license for genome research. Creative Commons. Dec 6 2007
- Nature Expands Open Access Publication for Genomic Research. PredictER Blog. Wednesday, December 5, 2007
- Nature opens up its genome sequencing literature. GMO Africa. Dec 12 2007
Tuesday, January 15, 2008
The Digital Curation Centre's (DCC) draft Curation Lifecycle Model is now available for comment. (Slightly) more information can be found in the paper Draft DCC Curation Lifecycle Model. Sarah Higgins. International Journal of Digital Curation, Issue 2, Volume 2, 2007.
I will be participating in this year's AAAS meeting in Boston with a presentation entitled "Canadian Initiative To Develop a National Strategy" at the session "Managing and Preserving Scientific Data: Emerging Perspectives on a Global Basis", moderated by Bonnie Carrol.
The other presentations:
- U.S. National Initiatives: Strategic Plan for Scientific Data Management and Preservation. Christopher L. Greer, National Science Foundation, USA
- U.K. Initiatives and Perspectives on Managing and Preserving Scientific Data. Liz Lyon, University of Bath, UK
- European Framework: Promote Access and Preserve Research Results for Future Generations. Carlos Morais-Pires, European Commission
The Science article NIH Announces Public-Access Policy describes the recently released policy update on NIH funded research publications:
Starting in April, most U.S. biomedical scientists will have to send copies of their accepted, peer-reviewed manuscripts to the U.S. National Institutes of Health (NIH) for posting in a free archive. If they don't, they could have trouble renewing their grants or even lose research funding.
Here is the new policy: Revised Policy on Enhancing Public Access to Archived Publications Resulting from NIH-Funded Research, (Jan 11 2008).
Peter Suber has some excellent comments on this new policy.
Posted by Glen Newton at 14:48
Echoing Peter Suber's blog entry on this, here are the articles from the January 2008 issue of Journal of Library and Information Technology:
I must have been asleep this spring to have missed this interesting article, A Proposed Standard for the Scholarly Citation of Quantitative Data (M. Altman & G. King, D-Lib Magazine, March/April 2007, Volume 13 Number 3/4) which introduces a reasonable specification allowing the citation of quantitative data.
This is a needed specification, which will hopefully increase the real (and perceived) value of datasets to researchers, to the people who evaluate them, and to the people who fund them. It makes datasets a measurable part of a researcher's performance, through usage and peer review.
"We propose that citations to numerical data include, at a minimum, six required components. The first three components are traditional, directly paralleling print documents. ... Thus, we add three components using modern technology, each of which is designed to persist even when the technology changes: a unique global identifier, a universal numeric fingerprint, and a bridge service [emphasis added]. They are also designed to take advantage of the digital form of quantitative data.
An example of a complete citation, using this minimal version of the proposed standards, is as follows:
Micah Altman; Karin MacDonald; Michael P. McDonald, 2005, "Computer Use in Redistricting",
- Paskin, N. (2005). Digital Object Identifiers for scientific data. Data Science Journal, Vol. 4, pp. 12-20.
- Kahn, R., Wilensky, R. (2006). A framework for distributed digital object services. International Journal on Digital Libraries, 6(2), 115-123. DOI: 10.1007/s00799-005-0128-x
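Of the three new components, the "universal numeric fingerprint" is the most interesting to me. A toy sketch of the idea follows; this is NOT the actual UNF algorithm Altman & King specify, only an illustration of the principle that values are first normalized to a canonical text form (so the fingerprint is independent of storage format) and then hashed:

```python
import base64
import hashlib

def toy_fingerprint(values, digits=7):
    """A toy numeric fingerprint, illustrating (not implementing) the
    UNF idea from Altman & King.

    Each value is rendered to a canonical scientific-notation string at a
    fixed precision, so the same data stored as float32, float64, or CSV
    text normalizes identically; the joined string is then hashed.
    Reordering or altering any value changes the fingerprint.
    """
    canonical = "\n".join("%.*e" % (digits - 1, v) for v in values) + "\n"
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

data = [3.1415926535, 2.7182818284, 1.4142135623]
print(toy_fingerprint(data))
```

Because the fingerprint travels with the citation, a reader who later retrieves the dataset (perhaps from a different archive, via the bridge service) can verify it is bit-for-bit the data the author cited.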
Friday, January 11, 2008
In my rather hectic December I missed this publication on Dec 17 2007 of the ERC Scientific Council Guidelines for Open Access.
From the document's "interim position on open access":
1. The ERC requires that all peer-reviewed publications from ERC-funded research projects be deposited on publication into an appropriate research repository where available, such as PubMed Central, ArXiv or an institutional repository, and subsequently made Open Access within 6 months of publication. [Emphasis added]
2. The ERC considers essential that primary data - which in the life sciences for example could comprise data such as nucleotide/protein sequences, macromolecular atomic coordinates and anonymized epidemiological data - are deposited to the relevant databases as soon as possible, preferably immediately after publication and in any case not later than 6 months after the date of publication. [Emphasis added]
Thanks to Mary Zborowski, CISTI, NRC and CODATA for pointing this out to me.
Tuesday, January 01, 2008
The Association of Research Libraries (ARL) has just released the report "Agenda for Developing E-Science in Research Libraries". Among other activities, this report specifically discusses the Canadian context, including the National Consultation on Access to Scientific Research Data (NCASRD) and Library and Archives Canada's recent (2007) release of a draft of its Canadian Digital Information Strategy.
Agenda for Developing E-Science in Research Libraries table of contents:
- E-Science: Implications for Research Practice
- National and International Context for E-Science
- Critical Areas for Research Library Engagement
- Data Issues and New Genres of Scholarly Communication
- Virtual Organizations
- Policy Development
- Current Library Capability to Support E-Science
- E-Science Task Force Recommendations: Outcomes, Strategies, Actions
- Structure and Process for ARL Agenda
- Develop Knowledgeable Community
- Develop Skilled Workforce
- Contribute to Research Infrastructure
- Develop Policy
From the executive summary:
Anticipated programmatic efforts would emphasize: education of the research library community about scientific trends, the emergent role of data curation, characteristics of virtual organizations, relevant policy for data and research dissemination, and tools and infrastructure systems. While the task force focused on e-science, it was mindful of the broader e-research context.