Bryan's Blog 2005/08
For years I've been using the term "op cit" to do citations, and wondering what it was actually an abbreviation for (some of us got no Latin at school). It turns out that it is
short for the Latin phrase opere citato, meaning "in the work already cited."
which is of course how I use it, but it's nice to know the actual Latin, and it has the bonus of helping me understand the word opera as well :-)
by Bryan Lawrence : 2005/08/25 (permalink)
Windpower: Local Solutions to Avoid Burning Carbon
I've mentioned in the past that I'm interested in small scale wind power generation. It strikes me as crazy that many of those advocating the importance of windpower seem to think that it can only be done with massive wind farms. There are many of us who live in windy locations who could generate a significant amount of local power with small wind turbines. Done right (i.e. as part of the house structure) it ought to be relatively inoffensive to look at (we're used to chimneys already) and harmless to birds (or not any worse than our large glass panels which take out the odd bird from time to time ... certainly there are no large raptors flying around our house).
So I'm glad to see the advent of just such a house-scale wind turbine. There are some obvious questions first:
What will it cost?
How noisy will it be?
Will I be able to feed back energy into the national grid?
How windy is my property?
(Update) Do I need, and if so, will I get planning permission?
The Windsave site is rather coy about the first question, but it's not quite an issue yet as I wouldn't be planning to do anything about it until next year (there are only so many house-based projects one bloke can do with a new child and a full-time job). They do point out that there are some quite good subsidies available though ...
The answer to the second question is quite comforting:
Free spinning (loudest noise potential):
5 metres behind blades, gusting to 5 m/s (12 mph): 33.0 dB
5 metres behind blades, gusting to 7 m/s (16 mph): LAeq 52.0 dB
3 metres behind blades, height 1.5 m: background noise LAeq 36.0 dB
Compare these with a typical fridge/freezer, which runs at between 40 and 50 dB. But I guess it would be interesting to know what it would sound like with winds at 30 mph (maximum generation) and 50 mph+.
Apparently the energy can be fed back into the national grid, although it's likely the average house "on idle" will use as much as it can get from these systems (given it's rated at about 1 kW at not far short of full speed). It's also not obvious there will be any easy way of metering that ...
The second to last question is something I can maybe do something about ... it might be interesting to see a map of mean and standard deviation wind speed for the UK. The obvious problem is that one needs the wind info at the microscale, and we only have the data at a much larger scale, but it'll still be interesting. Now to find the time ...
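If I ever did get hold of suitable wind data, the two numbers such a map needs per site are just a mean and a standard deviation. A minimal sketch (the hourly speeds below are invented purely for illustration):

```python
import statistics

# Hypothetical hourly wind speeds (m/s) at one site; real data would come
# from an observation archive at a much coarser spatial scale than we want.
speeds = [3.2, 5.1, 7.4, 4.0, 6.3, 2.8, 5.5, 8.1]

mean = statistics.mean(speeds)   # central tendency for the map
sd = statistics.stdev(speeds)    # variability (sample standard deviation)

print(f"mean {mean:.1f} m/s, standard deviation {sd:.1f} m/s")
# → mean 5.3 m/s, standard deviation 1.9 m/s
```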
(Update continued) The planning permission issue will be interesting too ... given one needs permission for a second satellite dish, it would seem likely that councils will try and get involved ... but I hope they won't be obstructive. If house-scale wind turbines become common, they'll be just as much part of our normal housescape as chimneys.
The bottom line though, from their web site is this:
There are approximately 23 million households in the UK today. If just 2 million of them - roughly 10% - were to have a micro-wind generator installed on their roof, that would take a potential 1,000,000 tonnes of CO2 out of the environment each year!
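As a quick sanity check of the quoted figures (using only the numbers in the quote): 2 million of 23 million households is nearer 9% than 10%, and the claimed saving works out at half a tonne of CO2 per household per year.

```python
# Arithmetic check of the Windsave claim quoted above.
households = 2_000_000           # homes with a micro-wind generator
total_tonnes = 1_000_000         # claimed annual CO2 saving
uk_households = 23_000_000       # approximate UK total

per_home = total_tonnes / households   # tonnes CO2 per household per year
share = households / uk_households     # fraction of UK households

print(per_home, round(share * 100, 1))  # → 0.5 8.7
```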
ISO19119 and Workflow
Having spent the last four years as part of the UK e-science community, and buying into the wonders of what "the grid" can do for me, I've just spent a bit of time looking into exactly what is in the ISO19119 standard for geographic information services. Despite my colleague Andrew Woolf telling me how important this was, I'd never really looked into this standard (despite running round banging on about how important standards are, and going on about how good they are in the metadata world, I hadn't had time to read this one ...). Anyway, it turns out to describe what most of the e-science world has been reinventing, and redocumenting in an ad hoc manner, for the last few years. Oh why oh why do standards organisations hide their wonderful products so successfully that no one uses them?
I thought the following was an insightful analysis of patterns of workflow management. Essentially, they describe three types of workflow management:
Transparent: user sees all of the services
Translucent: workflow aids the user
Opaque: aggregate service hides services
These three modes are described in three UML diagrams in the standard, only one of which is necessary to get the flavour, e.g. the translucent case:
The transparent case differs in that the workflow is set up by the user after looking at catalogue descriptions, and in the opaque case the catalogue describes the result of a series of workflow activities which are, as the name suggests, opaque to the user.
While this isn't a great leap forward, I claim it's insightful because it discriminates simply between the various things we as a community have been building, and clearly identifies how workflow will be useful to us. Now, if only someone was building workflow engines based on the ISO service metadata descriptions in this standard ...
When will the oil run out?
A few days ago I reported articles in Science on the rate at which oil might run out. Today I found this, which states that the Saudis are saying that OPEC won't be able to support demand in a mere ten to fifteen years. (I haven't read the entire article as I don't have access to an FT subscription.)
This at the same time as
Combined production of crude oil and liquids by some of the world's largest non-OPEC oil companies declined 0.2% in the first half of 2005 compared to the same period in 2004.
(source: Green Car Congress)
Maybe this is going to be a problem before I get to retirement ...
Nature DOI Failure
It's not very professional for a journal with such a great reputation.
by Bryan Lawrence : 2005/08/11 (permalink)
Metadata, XML and Deja Vu
It's funny how some concepts and issues are repeated in multiple communities. Recently I attended a meeting called "Activating Metadata" (agenda,talks) held at NIEeS. The sharp eyed amongst you will have noted that the link for the talks is .../metadata2 ... talks from an earlier event are here. There have been other meetings on a similar theme which don't seem to be archived.
These metadata events are both stimulating and boring in equal measures. Regrettably we go over much the same ground every time (boring), but I learn new things when I meet new communities (stimulating). Fortunately this second meeting was a very different community, being primarily geographers and users of geography data (in a loose definition of the sense). Of course they have their own vocabularies, and needs for specific metadata.
Anyway, there seemed to be a feeling amongst some that metadata standards were a hindrance to academic use of data, and that more "of the right sort of metadata" was needed. While I would obviously argue that more metadata is needed in just about any context, the usual argument that "the standards didn't do it for me therefore I wouldn't use them" was frustrating. Since the meeting, I've written to one of the participants, and amongst other things I wrote:
There are solution frameworks out there, and if one doesn't want to use them, one simply contributes to the proliferation of options and consequent user confusion. Most importantly, one needs to understand that standards exist not as an effort to constrain all possible metadata, but to constrain those elements where there is a chance of commonality (and interoperation between groups). You're free to produce whatever else you like that is relevant to your own user community.
In quite a different context Dare Obasanjo of Microsoft has been discussing the new buzzword microformats of XML. This discussion is about whether or not folk should be introducing their own tags into XML documents (with or without namespaces). There seems to have been much blognoise on the topic, but I think Derek (Only This and Nothing More) got it right:
Ever sit down at a table with a number of experts in a field that you do not know? They may be speaking English, but that doesn't mean you understand what they are talking about. If you try and force them to speak in laymen's terms, the efficiency of the information exchange drops dramatically. Specific languages are sometimes necessary. Individual specialties within Math and Computer Science all have customized definitions of terms, that sometimes conflict. Each specialty evolved its terminology to enable efficient, unambiguous communication between specialists in that field. Custom grammars are a necessity for efficient communication. Language reduces to the least common denominator of the intended listenership. If an application expects generic tools to process its data, then it should use a well known standard. If local efficiency (or development or data) is more important, then use custom formats.
Which reads like just the same thing I was saying in the metadata context, hence the Deja Vu.
Metadata standards such as ISO19115 are being designed with just this structure in mind. Indeed, ISO19115 has the following view of how it should be used:
Similarly, XML documents are built with XML namespaces in mind, and so the XML syntactical rendering of ISO19115 (the almost mythical ISO19139) will be built to allow communities to build such application profiles. I think that's what my geographer friends and the XML microformatters need to do: build application profiles of existing standards, with all their own information built in as extensions over the core. Sure, not every application will know what it's about, but all applications conforming to the core standards ought to be able to recognise elements in common, and specific applications can exploit the extra information.
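To make the core-plus-extension point concrete, here's a minimal sketch of a record mixing a "core" namespace with a community extension. The namespace URIs and element names below are invented for illustration; they are not taken from ISO19139.

```python
import xml.etree.ElementTree as ET

# A hypothetical record: core elements every conforming tool understands,
# plus an oceanography extension only specialist tools will exploit.
doc = """
<record xmlns:core="http://example.org/core"
        xmlns:ocean="http://example.org/ocean">
  <core:title>Sea surface temperature</core:title>
  <ocean:instrument>drifting buoy</ocean:instrument>
</record>
"""
root = ET.fromstring(doc)

# A generic tool recognises the core namespace ...
title = root.find("{http://example.org/core}title").text
# ... while a specialist tool can also read the extension element.
instrument = root.find("{http://example.org/ocean}instrument").text

print(title, "/", instrument)  # → Sea surface temperature / drifting buoy
```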
How one stores this information is a moot point (flat files, databases, GIS systems, whatever), but when we exchange the stuff it's going to have to be with XML. And with XML, we'll use namespaces. We fully expect the application profiles of standards to pull in stuff from other namespaces ...
... which in the microformatter argument leads to questions about whether or not entries should have the same names in all application profiles. Of course not! We just don't all use the same names for everything, but where we do have communities intersecting we can build ontologies (which are just sophisticated mappings between terms that we understand). We can then use tools to do conversion between documents, exactly as argued in, for example, Dare's first article linked above. But I think the microformat principles linked in the definition above hold well in the scientific metadata world too:
solve a specific problem
start as simple as possible
design for humans first, machines second
reuse building blocks from widely adopted standards
modularity / embeddability
enable and encourage decentralized development, content, services
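The term-mapping idea above can be sketched as nothing more than a dictionary plus a conversion function; all the vocabulary below is invented purely for illustration.

```python
# A very crude "ontology": a mapping between two communities' names for
# intersecting concepts. Field names here are hypothetical.
atmos_to_ocean = {
    "obs_time": "measurement_date",
    "temp_c": "sea_surface_temperature",
}

def convert(record, mapping):
    """Rename fields the mapping knows about; pass others through unchanged."""
    return {mapping.get(k, k): v for k, v in record.items()}

record = {"obs_time": "2005-08-25", "temp_c": 17.2, "station": "abc"}
print(convert(record, atmos_to_ocean))
# → {'measurement_date': '2005-08-25', 'sea_surface_temperature': 17.2, 'station': 'abc'}
```

Real ontology tooling is of course much more sophisticated than a rename table, but the principle (map where communities intersect, pass the rest through) is the same.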
More on the Plextor
Some more things I've learned. Firstly, that Suse 9.2 and Suse 9.3 use exactly the same versions of growisofs and mkisofs. So the difference in the errors reported earlier under Suse 9.2 and Suse 9.3 is probably about permissions, given cdrecord showed something rather different between root and normal user access under 9.3. So, under 9.2:
Run k3b from root terminal, burn an existing image. Yes it works. Both DVD-R and DVD+R.
Run k3b as root from a user session, and try and write on the fly. Fails.
Run k3b as user, and create the image and then write as part of one session, but with growisofs and mkisofs suid root. Fails, but with the Suse 9.3 error, not the permission error.
Write the iso image and then use k3b to write it, as a normal user (but with all that stuff setuid still there). Works.
Turn off the setuid, fails.
Turn on setuid only for growisofs, now works from iso image as a user, but not on the fly.
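For reference, the setuid experiments above amount to toggling the setuid bit on the binaries. A minimal sketch of setting and checking that bit (on a throwaway temp file rather than /usr/bin/growisofs):

```python
import os
import stat
import tempfile

# Create a throwaway file to stand in for the growisofs binary.
fd, path = tempfile.mkstemp()
os.close(fd)

# Equivalent of: chmod u+s <file> (plus rwxr-xr-x permissions).
# A non-root user may set this bit on files they own.
os.chmod(path, 0o755 | stat.S_ISUID)

# Check the bit, as 'ls -l' would show with an 's' in the owner field.
print(bool(os.stat(path).st_mode & stat.S_ISUID))  # → True
```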
Now, I know that my colleague is running k3b 0.12.2, with the same growisofs, but perhaps different tools (and kernel) underneath. More to investigate.
Broadband from Satellite
I'm still a satellite broadband user, and things have been better since I last reported: I regularly get 512 kb/s and sometimes 2 Mb/s, although nearly as often the system appears overloaded and completely unresponsive.
It would appear that I haven't much to look forward to: too far from the exchange for anything meaningful (i.e. 512 kb/s; I see no reason to bother with 256), and nothing technological on the horizon.
If I lived in Japan, things would be different. Space Daily is reporting that a new satellite is planned that will
make it possible to send and receive data at a maximum speed of 100 megabits per second in mountainous areas and remote islands, as well as aboard Shinkansen bullet trains, airplanes and ships...
The satellite is due to be in service by 2015, and in terms of signal strength, will even allow mobile phones to communicate at 10 Mb/s!
plextor PX-716UF Woes Under Suse 9.2 and 9.3
I do all my computing on my laptop, and have about 10 GB of user files on board (including about 2.3 GB of mail files). Obviously I care very much about backup, and I usually backup via rsync to a BADC server. My backup server broke last week and is still not up. For that, and other reasons (I want physical copies and to be able to backup my laptop at home), I purchased a usb/firewire dvd writer. Herewith my experience.
This device supports both firewire and USB2.0, but the firewire connection isn't hot mounted, and it's not obvious what to do with it. So, moving to USB2.0, and plugging it in, we have from dmesg:
Vendor: PLEXTOR   Model: DVDR PX-716A   Rev: 1.03
Type: CD-ROM   ANSI SCSI revision: 02
sr0: scsi3-mmc drive: 40x/40x writer cd/rw xa/form2 cdda tray
Attached scsi CD-ROM sr0 at scsi1, channel 0, id 0, lun 0
Attached scsi generic sg1 at scsi1, channel 0, id 0, lun 0, type 5
USB Mass Storage device found at 8
which I interpret to mean that the raw beastie is at /dev/sr0, although I'm a bit concerned about all this scsi emulation stuff since I'm told that we don't use the scsi emulation with 2.6 kernels ...
Under Suse 9.2, k3b recognises the device, and the DVD-R media, but fails at write time with
growisofs
-----------------------
:-( unable to open64("/dev/sr0",O_RDONLY): Permission denied

growisofs command:
-----------------------
/usr/bin/growisofs -Z /dev/sr0 -use-the-force-luke=notray -use-the-force-luke=tty -use-the-force-luke=dao -dvd-compat -speed=1 -gui -graft-points -volid Backup Mail+CEDAR -volset -appid K3B THE CD KREATOR VERSION 0.11.15cvs (C) 2003 SEBASTIAN TRUEG AND THE K3B TEAM -publisher Bryan Lawrence -preparer K3b - Version 0.11.15cvs -sysid LINUX -volset-size 1 -volset-seqno 1 -sort /tmp/kde-lawrence/k3bmHQT3a.tmp -rational-rock -hide-list /tmp/kde-lawrence/k3boB3OIb.tmp -full-iso9660-filenames -disable-deep-relocation -iso-level 2 -path-list /tmp/kde-lawrence/k3bctpRtc.tmp
(K3b Version:0.11.15cvs,KDE Version: 3.3.2 Level "a",QT Version: 3.3.3)
Under Suse 9.3, k3b recognises the device, and the DVD-R media, but fails at write time with
OPC failed. Please try writing speed 1x. Fatal Error at startup: Input/Output error
(but I had it set to 1x speed!!!). Show details gives me
...
/dev/sr0: engaging DVD-R DAO upon user request...
:-[ PERFORM OPC failed with SK=5h/ASC=2Ch/ACQ=00h]: Input/output error

growisofs command:
-----------------------
/usr/bin/growisofs -Z /dev/sr0 -use-the-force-luke=notray -use-the-force-luke=tty -use-the-force-luke=dao -dvd-compat -speed=1 -gui -graft-points -volid K3b data project -volset -appid K3B THE CD KREATOR VERSION 0.11.22cvs (C) 2003 SEBASTIAN TRUEG AND THE K3B TEAM -publisher -preparer K3b - Version 0.11.22cvs -sysid LINUX -volset-size 1 -volset-seqno 1 -sort /tmp/kde-bnl/k3bAxKrpb.tmp -rational-rock -hide-list /tmp/kde-bnl/k3b2Hb2fa.tmp -full-iso9660-filenames -disable-deep-relocation -iso-level 2 -path-list /tmp/kde-bnl/k3bPRTK8b.tmp
So I tried burning a CD (from Suse 9.3), and that works (very quickly)! So it's not connectivity of any sort. It's something dvd-acious ...
I then took the physical device to a colleague running Red Hat, using X-CD-Roast (version 0.98 with some patches apparently), which on his system passed the device argument (dev="/dev/scd1") to Cdrecord-ProDVD-Clone 2.01b31 with the "Unlocked features: ProDVD Clone". It worked.
So for the moment, on my system I have an expensive (but fast) CD writer, and some more investigating to do.
Update (later on same day): Tried this at home on a Suse 9.3 system with the latest updates and the latest k3b ... failed with the same error. But this was interesting: if I issue the command cdrecord -scanbus as a normal user I get
cdrecord -scanbus
Cdrecord-Clone 2.01 (i686-suse-linux) Copyright (C) 1995-2004 Jörg Schilling
Note: This version is an unofficial (modified) version ...
Linux sg driver version: 3.5.27
Using libscg version 'schily-0.8'.
cdrecord: Warning: using inofficial libscg transport code version (firstname.lastname@example.org '@(#)scsi-linux-sg.c 1.83 04/05/20 Copyright 1997 J. Schilling').
scsibus0:
        0,0,0     0) 'SAMSUNG ' 'CDRW/DVD SM-332B' 'T403' Removable CD-ROM
        0,1,0     1) *
...
but if I issue the same command as root, I get
cdrecord -scanbus
Cdrecord-Clone 2.01 (i686-suse-linux) Copyright (C) 1995-2004 Jörg Schilling
Note: This version is an unofficial (modified) version ...
scsibus0:
        0,0,0     0) 'PLEXTOR ' 'DVDR PX-716A ' '1.03' Removable CD-ROM
        0,1,0     1) *
...
Apache Release WS-Security Implementation
Davanum Srinivas has pointed out that Apache have released their WS-Security implementation (thanks Marta).
A couple of weeks is a long time in the "is there a patent problem or not" world. Two weeks ago I reported problems with the patent status of this activity (actually via Davanum's blog).
There is a long email conversation about this here (which covers IBM's position). There is a short email here covering Microsoft's position. The longer email conversation is interesting in that it covers a bunch of hypothetical situations, and exposes some fragility in relying on the Apache license (which protects the user from code contributor misbehaviour, but not third party patent encumbrance). However, Apache do have this statement:
Any known encumberance, such as Patent claims/required patent license, is an IP issue and covered by the general Board directive; Circumvent or Terminate.
(from this email). They didn't terminate. Which means of course that Apache believe WS-Security is safe (all three parties who were obvious candidates to have had patent/license claims have apparently stated that they don't have such patents, i.e. there are no known encumbrances). However, of course, the quote from Joseph Reagle I used last time is still valid:
Unfortunately, it's difficult for the patent status of anything to be very clear ...
This makes our legal folk nervous. They somehow think that if we write code ourselves we'll be safe, but of course we won't. I liked the comment somewhere in the long email conversation that any two lines of Java could probably be patent encumbered if you could find the patent. I suspect that's the reality of the software world. If no one is waving a patent around, and you don't know of one, you should just get on with it. So we will, and can use WS-Security after all, which means our NDG roadmap is safe.
As an aside, I note that the Apache wss4j distribution supports SAML tokens, so that must mean the known SAML issues have been resolved too.
(Readers might wonder why we are so worried about patents when such patents aren't enforceable in the UK. The problem is that our legal people tell us that if we distribute software on a website where Americans could download, we could still get sued in American courts ... and might be obliged to defend. I think this is a rather small risk ... but if we can avoid such risks life is easier.)
Catching up on Real Climate
I've been remiss in my reading of RealClimate, which is simply one of the best "blogs" around ... although I have to say it's less a web log than an interactive, nearly peer reviewed journal full of great articles and comments. (Why do I say it's peer reviewed? Because the articles get solid review in the comments. Why nearly? Because no editor comes down and makes decisions.) In fact, I find the comments are usually as interesting as the articles...
Solar Influence on Recent Climate
There is a tiny war of words in Nature this week about reconstructions of solar influences on climate. Muscheler et al. (2005) claim that current solar activity is not particularly unusual in a criticism of Solanki et al. (2004) (who themselves claim that the last few decades are unusual with respect to the previous 11,000 years). There is a reply by Solanki et al. which essentially states that Muscheler et al. have used an inappropriate normalisation to get large amounts of historical activity. It's only a tiny war though, because all parties agree with Solanki and Krivova (2003), who demonstrate that even if the last few decades are unusual, they can't explain the recent warming.
El Nino or La Nina in the Pliocene?
Apparently (Wara et al., 2005) the Pliocene should give us a good idea of what our climate could be like in a few decades, as many of the boundary conditions forcing the climate were similar then to what we see (or will see) today:
including first-order ocean circulation patterns, the Earth's continental configuration, small Northern Hemisphere ice coverage, and atmospheric carbon dioxide concentrations (about 30% higher than pre-anthropogenic values).
It would appear (from Wara et al., although not without controversy) that during the Pliocene conditions more like El Niño existed:
the eastern Pacific thermocline was deep and the average west-to-east sea surface temperature difference across the equatorial Pacific was only 1.5 ± 0.9 °C, much like it is during a modern El Niño event.
This means that the existing atmospheric circulation is potentially not stable to significant warming, and there could be significant redistributions in the oceanic currents and major atmospheric circulations associated with greenhouse-gas-induced warming. Indeed the final sentence of Wara et al. makes reference to the possibility that such redistributions might already be beginning.